2026-04-11 01:38:05.172400 | Job console starting
2026-04-11 01:38:05.185730 | Updating git repos
2026-04-11 01:38:05.253156 | Cloning repos into workspace
2026-04-11 01:38:05.476403 | Restoring repo states
2026-04-11 01:38:05.493335 | Merging changes
2026-04-11 01:38:05.493357 | Checking out repos
2026-04-11 01:38:05.765679 | Preparing playbooks
2026-04-11 01:38:06.504257 | Running Ansible setup
2026-04-11 01:38:10.976548 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-11 01:38:11.717598 |
2026-04-11 01:38:11.717761 | PLAY [Base pre]
2026-04-11 01:38:11.735048 |
2026-04-11 01:38:11.735188 | TASK [Setup log path fact]
2026-04-11 01:38:11.765264 | orchestrator | ok
2026-04-11 01:38:11.782784 |
2026-04-11 01:38:11.782953 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-11 01:38:11.823932 | orchestrator | ok
2026-04-11 01:38:11.836473 |
2026-04-11 01:38:11.836584 | TASK [emit-job-header : Print job information]
2026-04-11 01:38:11.891633 | # Job Information
2026-04-11 01:38:11.891913 | Ansible Version: 2.16.14
2026-04-11 01:38:11.891974 | Job: testbed-upgrade-stable-ubuntu-24.04
2026-04-11 01:38:11.892051 | Pipeline: periodic-midnight
2026-04-11 01:38:11.892091 | Executor: 521e9411259a
2026-04-11 01:38:11.892126 | Triggered by: https://github.com/osism/testbed
2026-04-11 01:38:11.892163 | Event ID: cc93a31476bb4705a7fc1e8d570633f7
2026-04-11 01:38:11.902027 |
2026-04-11 01:38:11.902168 | LOOP [emit-job-header : Print node information]
2026-04-11 01:38:12.031940 | orchestrator | ok:
2026-04-11 01:38:12.032237 | orchestrator | # Node Information
2026-04-11 01:38:12.032294 | orchestrator | Inventory Hostname: orchestrator
2026-04-11 01:38:12.032337 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-11 01:38:12.032373 | orchestrator | Username: zuul-testbed03
2026-04-11 01:38:12.032407 | orchestrator | Distro: Debian 12.13
2026-04-11 01:38:12.032448 | orchestrator | Provider: static-testbed
2026-04-11 01:38:12.032482 | orchestrator | Region:
2026-04-11 01:38:12.032516 | orchestrator | Label: testbed-orchestrator
2026-04-11 01:38:12.032549 | orchestrator | Product Name: OpenStack Nova
2026-04-11 01:38:12.032580 | orchestrator | Interface IP: 81.163.193.140
2026-04-11 01:38:12.061193 |
2026-04-11 01:38:12.061375 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-11 01:38:12.543302 | orchestrator -> localhost | changed
2026-04-11 01:38:12.560615 |
2026-04-11 01:38:12.560768 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-11 01:38:13.626067 | orchestrator -> localhost | changed
2026-04-11 01:38:13.655311 |
2026-04-11 01:38:13.655478 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-11 01:38:13.965337 | orchestrator -> localhost | ok
2026-04-11 01:38:13.972942 |
2026-04-11 01:38:13.973086 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-11 01:38:14.009920 | orchestrator | ok
2026-04-11 01:38:14.030595 | orchestrator | included: /var/lib/zuul/builds/85a52db06bb14d3cb1db1d0bd460f0db/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-11 01:38:14.038901 |
2026-04-11 01:38:14.039049 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-11 01:38:15.745331 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-11 01:38:15.745763 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/85a52db06bb14d3cb1db1d0bd460f0db/work/85a52db06bb14d3cb1db1d0bd460f0db_id_rsa
2026-04-11 01:38:15.745850 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/85a52db06bb14d3cb1db1d0bd460f0db/work/85a52db06bb14d3cb1db1d0bd460f0db_id_rsa.pub
2026-04-11 01:38:15.745904 | orchestrator -> localhost | The key fingerprint is:
2026-04-11 01:38:15.745952 | orchestrator -> localhost | SHA256:bNn+438DGId5glpIFTLDRSKlX3VxHUZlz/ZShBdJPTg zuul-build-sshkey
2026-04-11 01:38:15.746018 | orchestrator -> localhost | The key's randomart image is:
2026-04-11 01:38:15.746085 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-11 01:38:15.746133 | orchestrator -> localhost | | .o*+=.. o=O@|
2026-04-11 01:38:15.746178 | orchestrator -> localhost | | oo= . .E+*=|
2026-04-11 01:38:15.746220 | orchestrator -> localhost | | .. ... o ..*|
2026-04-11 01:38:15.746261 | orchestrator -> localhost | | .o.= = o o.|
2026-04-11 01:38:15.746302 | orchestrator -> localhost | | .S . * . .|
2026-04-11 01:38:15.746358 | orchestrator -> localhost | | o . . . . |
2026-04-11 01:38:15.746402 | orchestrator -> localhost | | . . |
2026-04-11 01:38:15.746443 | orchestrator -> localhost | | .. ..|
2026-04-11 01:38:15.746486 | orchestrator -> localhost | | .oo...|
2026-04-11 01:38:15.746529 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-11 01:38:15.746638 | orchestrator -> localhost | ok: Runtime: 0:00:01.203285
2026-04-11 01:38:15.758655 |
2026-04-11 01:38:15.758781 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-11 01:38:15.791910 | orchestrator | ok
2026-04-11 01:38:15.803788 | orchestrator | included: /var/lib/zuul/builds/85a52db06bb14d3cb1db1d0bd460f0db/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-11 01:38:15.813138 |
2026-04-11 01:38:15.813237 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-11 01:38:15.836829 | orchestrator | skipping: Conditional result was False
2026-04-11 01:38:15.846048 |
2026-04-11 01:38:15.846169 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-11 01:38:16.484699 | orchestrator | changed
2026-04-11 01:38:16.495154 |
2026-04-11 01:38:16.495303 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-11 01:38:16.805361 | orchestrator | ok
2026-04-11 01:38:16.814216 |
2026-04-11 01:38:16.814345 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-11 01:38:17.273146 | orchestrator | ok
2026-04-11 01:38:17.283025 |
2026-04-11 01:38:17.283334 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-11 01:38:17.727981 | orchestrator | ok
2026-04-11 01:38:17.736011 |
2026-04-11 01:38:17.736142 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-11 01:38:17.761118 | orchestrator | skipping: Conditional result was False
2026-04-11 01:38:17.773149 |
2026-04-11 01:38:17.773294 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-11 01:38:18.203277 | orchestrator -> localhost | changed
2026-04-11 01:38:18.218342 |
2026-04-11 01:38:18.218470 | TASK [add-build-sshkey : Add back temp key]
2026-04-11 01:38:18.576943 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/85a52db06bb14d3cb1db1d0bd460f0db/work/85a52db06bb14d3cb1db1d0bd460f0db_id_rsa (zuul-build-sshkey)
2026-04-11 01:38:18.577543 | orchestrator -> localhost | ok: Runtime: 0:00:00.019995
2026-04-11 01:38:18.593611 |
2026-04-11 01:38:18.593770 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-11 01:38:19.037624 | orchestrator | ok
2026-04-11 01:38:19.046093 |
2026-04-11 01:38:19.046227 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-11 01:38:19.080908 | orchestrator | skipping: Conditional result was False
2026-04-11 01:38:19.141234 |
2026-04-11 01:38:19.141374 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-11 01:38:19.597594 | orchestrator | ok
2026-04-11 01:38:19.614793 |
2026-04-11 01:38:19.615073 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-11 01:38:19.663807 | orchestrator | ok
2026-04-11 01:38:19.675165 |
2026-04-11 01:38:19.675334 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-11 01:38:19.976397 | orchestrator -> localhost | ok
2026-04-11 01:38:19.992320 |
2026-04-11 01:38:19.992483 | TASK [validate-host : Collect information about the host]
2026-04-11 01:38:21.307418 | orchestrator | ok
2026-04-11 01:38:21.324560 |
2026-04-11 01:38:21.324691 | TASK [validate-host : Sanitize hostname]
2026-04-11 01:38:21.401660 | orchestrator | ok
2026-04-11 01:38:21.410502 |
2026-04-11 01:38:21.410650 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-11 01:38:22.003688 | orchestrator -> localhost | changed
2026-04-11 01:38:22.016845 |
2026-04-11 01:38:22.017049 | TASK [validate-host : Collect information about zuul worker]
2026-04-11 01:38:22.493498 | orchestrator | ok
2026-04-11 01:38:22.502519 |
2026-04-11 01:38:22.502678 | TASK [validate-host : Write out all zuul information for each host]
2026-04-11 01:38:23.085572 | orchestrator -> localhost | changed
2026-04-11 01:38:23.096949 |
2026-04-11 01:38:23.097112 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-11 01:38:23.450518 | orchestrator | ok
2026-04-11 01:38:23.461286 |
2026-04-11 01:38:23.461418 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-11 01:38:46.446914 | orchestrator | changed:
2026-04-11 01:38:46.448216 | orchestrator | .d..t...... src/
2026-04-11 01:38:46.448426 | orchestrator | .d..t...... src/github.com/
2026-04-11 01:38:46.448512 | orchestrator | .d..t...... src/github.com/osism/
2026-04-11 01:38:46.448576 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-11 01:38:46.448636 | orchestrator | RedHat.yml
2026-04-11 01:38:46.474220 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-11 01:38:46.474239 | orchestrator | RedHat.yml
2026-04-11 01:38:46.474295 | orchestrator | = 2.2.0"...
2026-04-11 01:38:56.392050 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-11 01:38:56.412000 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-11 01:38:56.572658 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-11 01:38:56.956034 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-11 01:38:57.025203 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-11 01:38:57.794984 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-11 01:38:57.867284 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-11 01:38:58.554619 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-11 01:38:58.554747 | orchestrator |
2026-04-11 01:38:58.554760 | orchestrator | Providers are signed by their developers.
2026-04-11 01:38:58.554769 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-11 01:38:58.554777 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-11 01:38:58.554788 | orchestrator |
2026-04-11 01:38:58.554796 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-11 01:38:58.554803 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-11 01:38:58.554827 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-11 01:38:58.554835 | orchestrator | you run "tofu init" in the future.
2026-04-11 01:38:58.555063 | orchestrator |
2026-04-11 01:38:58.555088 | orchestrator | OpenTofu has been successfully initialized!
2026-04-11 01:38:58.555095 | orchestrator |
2026-04-11 01:38:58.555108 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-11 01:38:58.555116 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-11 01:38:58.555130 | orchestrator | should now work.
2026-04-11 01:38:58.555137 | orchestrator |
2026-04-11 01:38:58.555144 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-11 01:38:58.555151 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-11 01:38:58.555158 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-11 01:38:58.719474 | orchestrator | Created and switched to workspace "ci"!
2026-04-11 01:38:58.719542 | orchestrator |
2026-04-11 01:38:58.719550 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-11 01:38:58.719558 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-11 01:38:58.719564 | orchestrator | for this configuration.
2026-04-11 01:38:58.861320 | orchestrator | ci.auto.tfvars
2026-04-11 01:38:58.905126 | orchestrator | default_custom.tf
2026-04-11 01:38:59.926963 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-11 01:39:00.499352 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-11 01:39:00.799296 | orchestrator |
2026-04-11 01:39:00.799382 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-11 01:39:00.799400 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-11 01:39:00.799439 | orchestrator | + create
2026-04-11 01:39:00.799463 | orchestrator | <= read (data resources)
2026-04-11 01:39:00.799482 | orchestrator |
2026-04-11 01:39:00.799490 | orchestrator | OpenTofu will perform the following actions:
2026-04-11 01:39:00.799642 | orchestrator |
2026-04-11 01:39:00.799664 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-11 01:39:00.799672 | orchestrator | # (config refers to values not yet known)
2026-04-11 01:39:00.799678 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-11 01:39:00.799685 | orchestrator | + checksum = (known after apply)
2026-04-11 01:39:00.799692 | orchestrator | + created_at = (known after apply)
2026-04-11 01:39:00.799698 | orchestrator | + file = (known after apply)
2026-04-11 01:39:00.799704 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.799735 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.799742 | orchestrator | + min_disk_gb = (known after apply)
2026-04-11 01:39:00.799749 | orchestrator | + min_ram_mb = (known after apply)
2026-04-11 01:39:00.799755 | orchestrator | + most_recent = true
2026-04-11 01:39:00.799762 | orchestrator | + name = (known after apply)
2026-04-11 01:39:00.799768 | orchestrator | + protected = (known after apply)
2026-04-11 01:39:00.799775 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.799785 | orchestrator | + schema = (known after apply)
2026-04-11 01:39:00.799791 | orchestrator | + size_bytes = (known after apply)
2026-04-11 01:39:00.799798 | orchestrator | + tags = (known after apply)
2026-04-11 01:39:00.799804 | orchestrator | + updated_at = (known after apply)
2026-04-11 01:39:00.799810 | orchestrator | }
2026-04-11 01:39:00.799931 | orchestrator |
2026-04-11 01:39:00.799951 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-11 01:39:00.799958 | orchestrator | # (config refers to values not yet known)
2026-04-11 01:39:00.799965 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-11 01:39:00.799971 | orchestrator | + checksum = (known after apply)
2026-04-11 01:39:00.799978 | orchestrator | + created_at = (known after apply)
2026-04-11 01:39:00.799984 | orchestrator | + file = (known after apply)
2026-04-11 01:39:00.799990 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.799996 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.800003 | orchestrator | + min_disk_gb = (known after apply)
2026-04-11 01:39:00.800009 | orchestrator | + min_ram_mb = (known after apply)
2026-04-11 01:39:00.800015 | orchestrator | + most_recent = true
2026-04-11 01:39:00.800021 | orchestrator | + name = (known after apply)
2026-04-11 01:39:00.800028 | orchestrator | + protected = (known after apply)
2026-04-11 01:39:00.800034 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.800040 | orchestrator | + schema = (known after apply)
2026-04-11 01:39:00.800046 | orchestrator | + size_bytes = (known after apply)
2026-04-11 01:39:00.800052 | orchestrator | + tags = (known after apply)
2026-04-11 01:39:00.800058 | orchestrator | + updated_at = (known after apply)
2026-04-11 01:39:00.800065 | orchestrator | }
2026-04-11 01:39:00.800179 | orchestrator |
2026-04-11 01:39:00.800219 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-11 01:39:00.800228 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-11 01:39:00.800234 | orchestrator | + content = (known after apply)
2026-04-11 01:39:00.800241 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-11 01:39:00.800247 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-11 01:39:00.800254 | orchestrator | + content_md5 = (known after apply)
2026-04-11 01:39:00.800260 | orchestrator | + content_sha1 = (known after apply)
2026-04-11 01:39:00.800266 | orchestrator | + content_sha256 = (known after apply)
2026-04-11 01:39:00.800272 | orchestrator | + content_sha512 = (known after apply)
2026-04-11 01:39:00.800279 | orchestrator | + directory_permission = "0777"
2026-04-11 01:39:00.800285 | orchestrator | + file_permission = "0644"
2026-04-11 01:39:00.800291 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-11 01:39:00.800297 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.800303 | orchestrator | }
2026-04-11 01:39:00.800415 | orchestrator |
2026-04-11 01:39:00.800433 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-11 01:39:00.800441 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-11 01:39:00.800447 | orchestrator | + content = (known after apply)
2026-04-11 01:39:00.800453 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-11 01:39:00.800460 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-11 01:39:00.800466 | orchestrator | + content_md5 = (known after apply)
2026-04-11 01:39:00.800472 | orchestrator | + content_sha1 = (known after apply)
2026-04-11 01:39:00.800478 | orchestrator | + content_sha256 = (known after apply)
2026-04-11 01:39:00.800485 | orchestrator | + content_sha512 = (known after apply)
2026-04-11 01:39:00.800491 | orchestrator | + directory_permission = "0777"
2026-04-11 01:39:00.800497 | orchestrator | + file_permission = "0644"
2026-04-11 01:39:00.800509 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-11 01:39:00.800516 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.800522 | orchestrator | }
2026-04-11 01:39:00.800626 | orchestrator |
2026-04-11 01:39:00.800656 | orchestrator | # local_file.inventory will be created
2026-04-11 01:39:00.800664 | orchestrator | + resource "local_file" "inventory" {
2026-04-11 01:39:00.800670 | orchestrator | + content = (known after apply)
2026-04-11 01:39:00.800676 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-11 01:39:00.800683 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-11 01:39:00.800689 | orchestrator | + content_md5 = (known after apply)
2026-04-11 01:39:00.800695 | orchestrator | + content_sha1 = (known after apply)
2026-04-11 01:39:00.800702 | orchestrator | + content_sha256 = (known after apply)
2026-04-11 01:39:00.800708 | orchestrator | + content_sha512 = (known after apply)
2026-04-11 01:39:00.800714 | orchestrator | + directory_permission = "0777"
2026-04-11 01:39:00.800720 | orchestrator | + file_permission = "0644"
2026-04-11 01:39:00.800727 | orchestrator | + filename = "inventory.ci"
2026-04-11 01:39:00.800733 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.800739 | orchestrator | }
2026-04-11 01:39:00.800849 | orchestrator |
2026-04-11 01:39:00.800868 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-11 01:39:00.800875 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-11 01:39:00.800882 | orchestrator | + content = (sensitive value)
2026-04-11 01:39:00.800888 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-11 01:39:00.800894 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-11 01:39:00.800900 | orchestrator | + content_md5 = (known after apply)
2026-04-11 01:39:00.800907 | orchestrator | + content_sha1 = (known after apply)
2026-04-11 01:39:00.800913 | orchestrator | + content_sha256 = (known after apply)
2026-04-11 01:39:00.800919 | orchestrator | + content_sha512 = (known after apply)
2026-04-11 01:39:00.800925 | orchestrator | + directory_permission = "0700"
2026-04-11 01:39:00.800931 | orchestrator | + file_permission = "0600"
2026-04-11 01:39:00.800938 | orchestrator | + filename = ".id_rsa.ci"
2026-04-11 01:39:00.800944 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.800950 | orchestrator | }
2026-04-11 01:39:00.800981 | orchestrator |
2026-04-11 01:39:00.800999 | orchestrator | # null_resource.node_semaphore will be created
2026-04-11 01:39:00.801006 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-11 01:39:00.801012 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.801019 | orchestrator | }
2026-04-11 01:39:00.801116 | orchestrator |
2026-04-11 01:39:00.801135 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-11 01:39:00.801142 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-11 01:39:00.801149 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.801155 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.801161 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.801167 | orchestrator | + image_id = (known after apply)
2026-04-11 01:39:00.801174 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.801180 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-11 01:39:00.801186 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.801214 | orchestrator | + size = 80
2026-04-11 01:39:00.801222 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.801228 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.801234 | orchestrator | }
2026-04-11 01:39:00.801336 | orchestrator |
2026-04-11 01:39:00.801354 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-11 01:39:00.801362 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-11 01:39:00.801368 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.801374 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.801381 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.801393 | orchestrator | + image_id = (known after apply)
2026-04-11 01:39:00.801400 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.801406 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-11 01:39:00.801412 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.801418 | orchestrator | + size = 80
2026-04-11 01:39:00.801425 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.801431 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.801437 | orchestrator | }
2026-04-11 01:39:00.801533 | orchestrator |
2026-04-11 01:39:00.801552 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-11 01:39:00.801559 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-11 01:39:00.801567 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.801576 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.801587 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.801597 | orchestrator | + image_id = (known after apply)
2026-04-11 01:39:00.801607 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.801618 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-11 01:39:00.801624 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.801630 | orchestrator | + size = 80
2026-04-11 01:39:00.801637 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.801643 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.801649 | orchestrator | }
2026-04-11 01:39:00.801750 | orchestrator |
2026-04-11 01:39:00.801768 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-11 01:39:00.801775 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-11 01:39:00.801781 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.801788 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.801794 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.801800 | orchestrator | + image_id = (known after apply)
2026-04-11 01:39:00.801806 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.801812 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-11 01:39:00.801818 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.801825 | orchestrator | + size = 80
2026-04-11 01:39:00.801831 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.801837 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.801843 | orchestrator | }
2026-04-11 01:39:00.801995 | orchestrator |
2026-04-11 01:39:00.802047 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-11 01:39:00.802057 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-11 01:39:00.802063 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.802070 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.802076 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.802082 | orchestrator | + image_id = (known after apply)
2026-04-11 01:39:00.802088 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.802101 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-11 01:39:00.802107 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.802114 | orchestrator | + size = 80
2026-04-11 01:39:00.802120 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.802126 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.802132 | orchestrator | }
2026-04-11 01:39:00.802277 | orchestrator |
2026-04-11 01:39:00.802304 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-11 01:39:00.802312 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-11 01:39:00.802318 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.802324 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.802330 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.802345 | orchestrator | + image_id = (known after apply)
2026-04-11 01:39:00.802351 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.802357 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-11 01:39:00.802363 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.802369 | orchestrator | + size = 80
2026-04-11 01:39:00.802376 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.802382 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.802388 | orchestrator | }
2026-04-11 01:39:00.802497 | orchestrator |
2026-04-11 01:39:00.802517 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-11 01:39:00.802525 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-11 01:39:00.802531 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.802537 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.802543 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.802550 | orchestrator | + image_id = (known after apply)
2026-04-11 01:39:00.802556 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.802562 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-11 01:39:00.802568 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.802575 | orchestrator | + size = 80
2026-04-11 01:39:00.802581 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.802587 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.802593 | orchestrator | }
2026-04-11 01:39:00.802688 | orchestrator |
2026-04-11 01:39:00.802708 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-11 01:39:00.802717 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 01:39:00.802723 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.802731 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.802737 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.802744 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.802751 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-11 01:39:00.802758 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.802764 | orchestrator | + size = 20
2026-04-11 01:39:00.802771 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.802778 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.802784 | orchestrator | }
2026-04-11 01:39:00.802881 | orchestrator |
2026-04-11 01:39:00.802900 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-11 01:39:00.802908 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 01:39:00.802915 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.802922 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.802929 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.802935 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.802942 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-11 01:39:00.802948 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.802955 | orchestrator | + size = 20
2026-04-11 01:39:00.802962 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.802968 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.802975 | orchestrator | }
2026-04-11 01:39:00.803071 | orchestrator |
2026-04-11 01:39:00.803091 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-11 01:39:00.803098 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 01:39:00.803105 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.803112 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.803118 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.803125 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.803132 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-11 01:39:00.803138 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.803152 | orchestrator | + size = 20
2026-04-11 01:39:00.803158 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.803165 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.803172 | orchestrator | }
2026-04-11 01:39:00.803286 | orchestrator |
2026-04-11 01:39:00.803308 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-11 01:39:00.803316 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 01:39:00.803323 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.803330 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.803336 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.803343 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.803349 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-11 01:39:00.803356 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.803363 | orchestrator | + size = 20
2026-04-11 01:39:00.803370 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.803376 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.803383 | orchestrator | }
2026-04-11 01:39:00.803478 | orchestrator |
2026-04-11 01:39:00.803497 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-11 01:39:00.803505 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 01:39:00.803512 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.803518 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.803525 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.803531 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.803538 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-11 01:39:00.803545 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.803557 | orchestrator | + size = 20
2026-04-11 01:39:00.803564 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.803571 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.803578 | orchestrator | }
2026-04-11 01:39:00.803668 | orchestrator |
2026-04-11 01:39:00.803688 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-11 01:39:00.803696 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 01:39:00.803703 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.803709 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.803716 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.803723 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.803729 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-11 01:39:00.803736 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.803760 | orchestrator | + size = 20
2026-04-11 01:39:00.803767 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.803773 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.803780 | orchestrator | }
2026-04-11 01:39:00.803998 | orchestrator |
2026-04-11 01:39:00.804021 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-11 01:39:00.804029 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 01:39:00.804035 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.804042 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.804063 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.804070 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.804076 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-11 01:39:00.804083 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.804090 | orchestrator | + size = 20
2026-04-11 01:39:00.804096 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.804103 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.804110 | orchestrator | }
2026-04-11 01:39:00.804260 | orchestrator |
2026-04-11 01:39:00.804282 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-11 01:39:00.804303 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-11 01:39:00.804317 | orchestrator | + attachment = (known after apply)
2026-04-11 01:39:00.804325 | orchestrator | + availability_zone = "nova"
2026-04-11 01:39:00.804331 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.804338 | orchestrator | + metadata = (known after apply)
2026-04-11 01:39:00.804345 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-11 01:39:00.804351 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.804358 | orchestrator | + size = 20
2026-04-11 01:39:00.804365 | orchestrator | + volume_retype_policy = "never"
2026-04-11 01:39:00.804372 | orchestrator | + volume_type = "ssd"
2026-04-11 01:39:00.804379 | orchestrator | }
2026-04-11 01:39:00.804571 | orchestrator |
2026-04-11 01:39:00.804605 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-11 01:39:00.804614 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-11 01:39:00.804620 | orchestrator | + attachment = (known after apply) 2026-04-11 01:39:00.804627 | orchestrator | + availability_zone = "nova" 2026-04-11 01:39:00.804634 | orchestrator | + id = (known after apply) 2026-04-11 01:39:00.804641 | orchestrator | + metadata = (known after apply) 2026-04-11 01:39:00.804647 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-11 01:39:00.804654 | orchestrator | + region = (known after apply) 2026-04-11 01:39:00.804674 | orchestrator | + size = 20 2026-04-11 01:39:00.804681 | orchestrator | + volume_retype_policy = "never" 2026-04-11 01:39:00.804688 | orchestrator | + volume_type = "ssd" 2026-04-11 01:39:00.804694 | orchestrator | } 2026-04-11 01:39:00.805126 | orchestrator | 2026-04-11 01:39:00.805149 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-11 01:39:00.805157 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-11 01:39:00.805164 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 01:39:00.805171 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 01:39:00.805190 | orchestrator | + all_metadata = (known after apply) 2026-04-11 01:39:00.805215 | orchestrator | + all_tags = (known after apply) 2026-04-11 01:39:00.805222 | orchestrator | + availability_zone = "nova" 2026-04-11 01:39:00.805229 | orchestrator | + config_drive = true 2026-04-11 01:39:00.805236 | orchestrator | + created = (known after apply) 2026-04-11 01:39:00.805242 | orchestrator | + flavor_id = (known after apply) 2026-04-11 01:39:00.805249 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-11 01:39:00.805256 | orchestrator | + force_delete = false 2026-04-11 01:39:00.805262 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 01:39:00.805269 | 
orchestrator | + id = (known after apply) 2026-04-11 01:39:00.805276 | orchestrator | + image_id = (known after apply) 2026-04-11 01:39:00.805282 | orchestrator | + image_name = (known after apply) 2026-04-11 01:39:00.805289 | orchestrator | + key_pair = "testbed" 2026-04-11 01:39:00.805295 | orchestrator | + name = "testbed-manager" 2026-04-11 01:39:00.805302 | orchestrator | + power_state = "active" 2026-04-11 01:39:00.805309 | orchestrator | + region = (known after apply) 2026-04-11 01:39:00.805315 | orchestrator | + security_groups = (known after apply) 2026-04-11 01:39:00.805322 | orchestrator | + stop_before_destroy = false 2026-04-11 01:39:00.805329 | orchestrator | + updated = (known after apply) 2026-04-11 01:39:00.805335 | orchestrator | + user_data = (sensitive value) 2026-04-11 01:39:00.805342 | orchestrator | 2026-04-11 01:39:00.805349 | orchestrator | + block_device { 2026-04-11 01:39:00.805355 | orchestrator | + boot_index = 0 2026-04-11 01:39:00.805362 | orchestrator | + delete_on_termination = false 2026-04-11 01:39:00.805375 | orchestrator | + destination_type = "volume" 2026-04-11 01:39:00.805382 | orchestrator | + multiattach = false 2026-04-11 01:39:00.805389 | orchestrator | + source_type = "volume" 2026-04-11 01:39:00.805395 | orchestrator | + uuid = (known after apply) 2026-04-11 01:39:00.805409 | orchestrator | } 2026-04-11 01:39:00.805416 | orchestrator | 2026-04-11 01:39:00.805422 | orchestrator | + network { 2026-04-11 01:39:00.805429 | orchestrator | + access_network = false 2026-04-11 01:39:00.805436 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 01:39:00.805442 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 01:39:00.805449 | orchestrator | + mac = (known after apply) 2026-04-11 01:39:00.805455 | orchestrator | + name = (known after apply) 2026-04-11 01:39:00.805462 | orchestrator | + port = (known after apply) 2026-04-11 01:39:00.805468 | orchestrator | + uuid = (known after apply) 2026-04-11 
01:39:00.805475 | orchestrator | } 2026-04-11 01:39:00.805482 | orchestrator | } 2026-04-11 01:39:00.805881 | orchestrator | 2026-04-11 01:39:00.805925 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-11 01:39:00.805933 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-11 01:39:00.805940 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 01:39:00.805947 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 01:39:00.805954 | orchestrator | + all_metadata = (known after apply) 2026-04-11 01:39:00.805960 | orchestrator | + all_tags = (known after apply) 2026-04-11 01:39:00.805967 | orchestrator | + availability_zone = "nova" 2026-04-11 01:39:00.805973 | orchestrator | + config_drive = true 2026-04-11 01:39:00.805993 | orchestrator | + created = (known after apply) 2026-04-11 01:39:00.806000 | orchestrator | + flavor_id = (known after apply) 2026-04-11 01:39:00.806006 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-11 01:39:00.806030 | orchestrator | + force_delete = false 2026-04-11 01:39:00.806038 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 01:39:00.806045 | orchestrator | + id = (known after apply) 2026-04-11 01:39:00.806052 | orchestrator | + image_id = (known after apply) 2026-04-11 01:39:00.806059 | orchestrator | + image_name = (known after apply) 2026-04-11 01:39:00.806065 | orchestrator | + key_pair = "testbed" 2026-04-11 01:39:00.806072 | orchestrator | + name = "testbed-node-0" 2026-04-11 01:39:00.806079 | orchestrator | + power_state = "active" 2026-04-11 01:39:00.806085 | orchestrator | + region = (known after apply) 2026-04-11 01:39:00.806092 | orchestrator | + security_groups = (known after apply) 2026-04-11 01:39:00.806099 | orchestrator | + stop_before_destroy = false 2026-04-11 01:39:00.806105 | orchestrator | + updated = (known after apply) 2026-04-11 01:39:00.806112 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-11 01:39:00.806119 | orchestrator | 2026-04-11 01:39:00.806125 | orchestrator | + block_device { 2026-04-11 01:39:00.806132 | orchestrator | + boot_index = 0 2026-04-11 01:39:00.806139 | orchestrator | + delete_on_termination = false 2026-04-11 01:39:00.806145 | orchestrator | + destination_type = "volume" 2026-04-11 01:39:00.806152 | orchestrator | + multiattach = false 2026-04-11 01:39:00.806158 | orchestrator | + source_type = "volume" 2026-04-11 01:39:00.806165 | orchestrator | + uuid = (known after apply) 2026-04-11 01:39:00.806172 | orchestrator | } 2026-04-11 01:39:00.806179 | orchestrator | 2026-04-11 01:39:00.806185 | orchestrator | + network { 2026-04-11 01:39:00.806212 | orchestrator | + access_network = false 2026-04-11 01:39:00.806224 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 01:39:00.806235 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 01:39:00.806246 | orchestrator | + mac = (known after apply) 2026-04-11 01:39:00.806257 | orchestrator | + name = (known after apply) 2026-04-11 01:39:00.806268 | orchestrator | + port = (known after apply) 2026-04-11 01:39:00.806280 | orchestrator | + uuid = (known after apply) 2026-04-11 01:39:00.806290 | orchestrator | } 2026-04-11 01:39:00.806297 | orchestrator | } 2026-04-11 01:39:00.806748 | orchestrator | 2026-04-11 01:39:00.806771 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-11 01:39:00.806780 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-11 01:39:00.806787 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 01:39:00.806822 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 01:39:00.806829 | orchestrator | + all_metadata = (known after apply) 2026-04-11 01:39:00.806836 | orchestrator | + all_tags = (known after apply) 2026-04-11 01:39:00.806843 | orchestrator | + availability_zone = "nova" 2026-04-11 01:39:00.806850 
| orchestrator | + config_drive = true 2026-04-11 01:39:00.806857 | orchestrator | + created = (known after apply) 2026-04-11 01:39:00.806865 | orchestrator | + flavor_id = (known after apply) 2026-04-11 01:39:00.806872 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-11 01:39:00.806879 | orchestrator | + force_delete = false 2026-04-11 01:39:00.806886 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 01:39:00.806893 | orchestrator | + id = (known after apply) 2026-04-11 01:39:00.806900 | orchestrator | + image_id = (known after apply) 2026-04-11 01:39:00.806907 | orchestrator | + image_name = (known after apply) 2026-04-11 01:39:00.806914 | orchestrator | + key_pair = "testbed" 2026-04-11 01:39:00.806921 | orchestrator | + name = "testbed-node-1" 2026-04-11 01:39:00.806928 | orchestrator | + power_state = "active" 2026-04-11 01:39:00.806935 | orchestrator | + region = (known after apply) 2026-04-11 01:39:00.806943 | orchestrator | + security_groups = (known after apply) 2026-04-11 01:39:00.806950 | orchestrator | + stop_before_destroy = false 2026-04-11 01:39:00.806957 | orchestrator | + updated = (known after apply) 2026-04-11 01:39:00.806964 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-11 01:39:00.806971 | orchestrator | 2026-04-11 01:39:00.806978 | orchestrator | + block_device { 2026-04-11 01:39:00.806985 | orchestrator | + boot_index = 0 2026-04-11 01:39:00.806992 | orchestrator | + delete_on_termination = false 2026-04-11 01:39:00.806999 | orchestrator | + destination_type = "volume" 2026-04-11 01:39:00.807006 | orchestrator | + multiattach = false 2026-04-11 01:39:00.807013 | orchestrator | + source_type = "volume" 2026-04-11 01:39:00.807021 | orchestrator | + uuid = (known after apply) 2026-04-11 01:39:00.807028 | orchestrator | } 2026-04-11 01:39:00.807035 | orchestrator | 2026-04-11 01:39:00.807042 | orchestrator | + network { 2026-04-11 01:39:00.807049 | orchestrator | + access_network = 
false 2026-04-11 01:39:00.807056 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 01:39:00.807063 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 01:39:00.807070 | orchestrator | + mac = (known after apply) 2026-04-11 01:39:00.807078 | orchestrator | + name = (known after apply) 2026-04-11 01:39:00.807085 | orchestrator | + port = (known after apply) 2026-04-11 01:39:00.807092 | orchestrator | + uuid = (known after apply) 2026-04-11 01:39:00.807099 | orchestrator | } 2026-04-11 01:39:00.807106 | orchestrator | } 2026-04-11 01:39:00.807571 | orchestrator | 2026-04-11 01:39:00.807597 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-11 01:39:00.807605 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-11 01:39:00.807613 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 01:39:00.807633 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 01:39:00.807643 | orchestrator | + all_metadata = (known after apply) 2026-04-11 01:39:00.807650 | orchestrator | + all_tags = (known after apply) 2026-04-11 01:39:00.807664 | orchestrator | + availability_zone = "nova" 2026-04-11 01:39:00.807671 | orchestrator | + config_drive = true 2026-04-11 01:39:00.807679 | orchestrator | + created = (known after apply) 2026-04-11 01:39:00.807686 | orchestrator | + flavor_id = (known after apply) 2026-04-11 01:39:00.807694 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-11 01:39:00.807701 | orchestrator | + force_delete = false 2026-04-11 01:39:00.807708 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 01:39:00.807715 | orchestrator | + id = (known after apply) 2026-04-11 01:39:00.807722 | orchestrator | + image_id = (known after apply) 2026-04-11 01:39:00.807736 | orchestrator | + image_name = (known after apply) 2026-04-11 01:39:00.807744 | orchestrator | + key_pair = "testbed" 2026-04-11 01:39:00.807751 | orchestrator | + name = 
"testbed-node-2" 2026-04-11 01:39:00.807758 | orchestrator | + power_state = "active" 2026-04-11 01:39:00.807766 | orchestrator | + region = (known after apply) 2026-04-11 01:39:00.807773 | orchestrator | + security_groups = (known after apply) 2026-04-11 01:39:00.807780 | orchestrator | + stop_before_destroy = false 2026-04-11 01:39:00.807787 | orchestrator | + updated = (known after apply) 2026-04-11 01:39:00.807794 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-11 01:39:00.807802 | orchestrator | 2026-04-11 01:39:00.807809 | orchestrator | + block_device { 2026-04-11 01:39:00.807816 | orchestrator | + boot_index = 0 2026-04-11 01:39:00.807824 | orchestrator | + delete_on_termination = false 2026-04-11 01:39:00.807831 | orchestrator | + destination_type = "volume" 2026-04-11 01:39:00.807838 | orchestrator | + multiattach = false 2026-04-11 01:39:00.807845 | orchestrator | + source_type = "volume" 2026-04-11 01:39:00.807852 | orchestrator | + uuid = (known after apply) 2026-04-11 01:39:00.807860 | orchestrator | } 2026-04-11 01:39:00.807867 | orchestrator | 2026-04-11 01:39:00.807874 | orchestrator | + network { 2026-04-11 01:39:00.807882 | orchestrator | + access_network = false 2026-04-11 01:39:00.807889 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 01:39:00.807896 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 01:39:00.807903 | orchestrator | + mac = (known after apply) 2026-04-11 01:39:00.807911 | orchestrator | + name = (known after apply) 2026-04-11 01:39:00.807918 | orchestrator | + port = (known after apply) 2026-04-11 01:39:00.807925 | orchestrator | + uuid = (known after apply) 2026-04-11 01:39:00.807933 | orchestrator | } 2026-04-11 01:39:00.807940 | orchestrator | } 2026-04-11 01:39:00.808410 | orchestrator | 2026-04-11 01:39:00.808435 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-11 01:39:00.808444 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-11 01:39:00.808465 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 01:39:00.808473 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 01:39:00.808480 | orchestrator | + all_metadata = (known after apply) 2026-04-11 01:39:00.808488 | orchestrator | + all_tags = (known after apply) 2026-04-11 01:39:00.808495 | orchestrator | + availability_zone = "nova" 2026-04-11 01:39:00.808502 | orchestrator | + config_drive = true 2026-04-11 01:39:00.808509 | orchestrator | + created = (known after apply) 2026-04-11 01:39:00.808516 | orchestrator | + flavor_id = (known after apply) 2026-04-11 01:39:00.808523 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-11 01:39:00.808530 | orchestrator | + force_delete = false 2026-04-11 01:39:00.808538 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 01:39:00.808545 | orchestrator | + id = (known after apply) 2026-04-11 01:39:00.808552 | orchestrator | + image_id = (known after apply) 2026-04-11 01:39:00.808560 | orchestrator | + image_name = (known after apply) 2026-04-11 01:39:00.808567 | orchestrator | + key_pair = "testbed" 2026-04-11 01:39:00.808574 | orchestrator | + name = "testbed-node-3" 2026-04-11 01:39:00.808582 | orchestrator | + power_state = "active" 2026-04-11 01:39:00.808589 | orchestrator | + region = (known after apply) 2026-04-11 01:39:00.808596 | orchestrator | + security_groups = (known after apply) 2026-04-11 01:39:00.808603 | orchestrator | + stop_before_destroy = false 2026-04-11 01:39:00.808610 | orchestrator | + updated = (known after apply) 2026-04-11 01:39:00.808618 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-11 01:39:00.808625 | orchestrator | 2026-04-11 01:39:00.808632 | orchestrator | + block_device { 2026-04-11 01:39:00.808644 | orchestrator | + boot_index = 0 2026-04-11 01:39:00.808651 | orchestrator | + delete_on_termination = false 2026-04-11 
01:39:00.808659 | orchestrator | + destination_type = "volume" 2026-04-11 01:39:00.808673 | orchestrator | + multiattach = false 2026-04-11 01:39:00.808680 | orchestrator | + source_type = "volume" 2026-04-11 01:39:00.808688 | orchestrator | + uuid = (known after apply) 2026-04-11 01:39:00.808695 | orchestrator | } 2026-04-11 01:39:00.808702 | orchestrator | 2026-04-11 01:39:00.808709 | orchestrator | + network { 2026-04-11 01:39:00.808716 | orchestrator | + access_network = false 2026-04-11 01:39:00.808723 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 01:39:00.808731 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 01:39:00.808738 | orchestrator | + mac = (known after apply) 2026-04-11 01:39:00.808745 | orchestrator | + name = (known after apply) 2026-04-11 01:39:00.808752 | orchestrator | + port = (known after apply) 2026-04-11 01:39:00.808759 | orchestrator | + uuid = (known after apply) 2026-04-11 01:39:00.808766 | orchestrator | } 2026-04-11 01:39:00.808774 | orchestrator | } 2026-04-11 01:39:00.809229 | orchestrator | 2026-04-11 01:39:00.809253 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-11 01:39:00.809262 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-11 01:39:00.809269 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 01:39:00.809276 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 01:39:00.809298 | orchestrator | + all_metadata = (known after apply) 2026-04-11 01:39:00.809305 | orchestrator | + all_tags = (known after apply) 2026-04-11 01:39:00.809312 | orchestrator | + availability_zone = "nova" 2026-04-11 01:39:00.809319 | orchestrator | + config_drive = true 2026-04-11 01:39:00.809327 | orchestrator | + created = (known after apply) 2026-04-11 01:39:00.809334 | orchestrator | + flavor_id = (known after apply) 2026-04-11 01:39:00.809341 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-11 01:39:00.809348 | 
orchestrator | + force_delete = false 2026-04-11 01:39:00.809355 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 01:39:00.809363 | orchestrator | + id = (known after apply) 2026-04-11 01:39:00.809370 | orchestrator | + image_id = (known after apply) 2026-04-11 01:39:00.809377 | orchestrator | + image_name = (known after apply) 2026-04-11 01:39:00.809384 | orchestrator | + key_pair = "testbed" 2026-04-11 01:39:00.809392 | orchestrator | + name = "testbed-node-4" 2026-04-11 01:39:00.809399 | orchestrator | + power_state = "active" 2026-04-11 01:39:00.809406 | orchestrator | + region = (known after apply) 2026-04-11 01:39:00.809413 | orchestrator | + security_groups = (known after apply) 2026-04-11 01:39:00.809420 | orchestrator | + stop_before_destroy = false 2026-04-11 01:39:00.809427 | orchestrator | + updated = (known after apply) 2026-04-11 01:39:00.809435 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-11 01:39:00.809442 | orchestrator | 2026-04-11 01:39:00.809449 | orchestrator | + block_device { 2026-04-11 01:39:00.809457 | orchestrator | + boot_index = 0 2026-04-11 01:39:00.809464 | orchestrator | + delete_on_termination = false 2026-04-11 01:39:00.809471 | orchestrator | + destination_type = "volume" 2026-04-11 01:39:00.809478 | orchestrator | + multiattach = false 2026-04-11 01:39:00.809485 | orchestrator | + source_type = "volume" 2026-04-11 01:39:00.809492 | orchestrator | + uuid = (known after apply) 2026-04-11 01:39:00.809500 | orchestrator | } 2026-04-11 01:39:00.809507 | orchestrator | 2026-04-11 01:39:00.809514 | orchestrator | + network { 2026-04-11 01:39:00.809521 | orchestrator | + access_network = false 2026-04-11 01:39:00.809528 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 01:39:00.809536 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 01:39:00.809543 | orchestrator | + mac = (known after apply) 2026-04-11 01:39:00.809550 | orchestrator | + name = (known 
after apply) 2026-04-11 01:39:00.809557 | orchestrator | + port = (known after apply) 2026-04-11 01:39:00.809564 | orchestrator | + uuid = (known after apply) 2026-04-11 01:39:00.809572 | orchestrator | } 2026-04-11 01:39:00.809579 | orchestrator | } 2026-04-11 01:39:00.810040 | orchestrator | 2026-04-11 01:39:00.810065 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-11 01:39:00.810073 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-11 01:39:00.810094 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-11 01:39:00.810102 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-11 01:39:00.810109 | orchestrator | + all_metadata = (known after apply) 2026-04-11 01:39:00.810117 | orchestrator | + all_tags = (known after apply) 2026-04-11 01:39:00.810124 | orchestrator | + availability_zone = "nova" 2026-04-11 01:39:00.810131 | orchestrator | + config_drive = true 2026-04-11 01:39:00.810138 | orchestrator | + created = (known after apply) 2026-04-11 01:39:00.810146 | orchestrator | + flavor_id = (known after apply) 2026-04-11 01:39:00.810153 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-11 01:39:00.810160 | orchestrator | + force_delete = false 2026-04-11 01:39:00.810172 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-11 01:39:00.810180 | orchestrator | + id = (known after apply) 2026-04-11 01:39:00.810187 | orchestrator | + image_id = (known after apply) 2026-04-11 01:39:00.810213 | orchestrator | + image_name = (known after apply) 2026-04-11 01:39:00.810225 | orchestrator | + key_pair = "testbed" 2026-04-11 01:39:00.810236 | orchestrator | + name = "testbed-node-5" 2026-04-11 01:39:00.810247 | orchestrator | + power_state = "active" 2026-04-11 01:39:00.810260 | orchestrator | + region = (known after apply) 2026-04-11 01:39:00.810272 | orchestrator | + security_groups = (known after apply) 2026-04-11 01:39:00.810283 | orchestrator | + 
stop_before_destroy = false 2026-04-11 01:39:00.810292 | orchestrator | + updated = (known after apply) 2026-04-11 01:39:00.810299 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-11 01:39:00.810307 | orchestrator | 2026-04-11 01:39:00.810314 | orchestrator | + block_device { 2026-04-11 01:39:00.810321 | orchestrator | + boot_index = 0 2026-04-11 01:39:00.810328 | orchestrator | + delete_on_termination = false 2026-04-11 01:39:00.810335 | orchestrator | + destination_type = "volume" 2026-04-11 01:39:00.810343 | orchestrator | + multiattach = false 2026-04-11 01:39:00.810350 | orchestrator | + source_type = "volume" 2026-04-11 01:39:00.810357 | orchestrator | + uuid = (known after apply) 2026-04-11 01:39:00.810364 | orchestrator | } 2026-04-11 01:39:00.810371 | orchestrator | 2026-04-11 01:39:00.810379 | orchestrator | + network { 2026-04-11 01:39:00.810386 | orchestrator | + access_network = false 2026-04-11 01:39:00.810393 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-11 01:39:00.810400 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-11 01:39:00.810407 | orchestrator | + mac = (known after apply) 2026-04-11 01:39:00.810415 | orchestrator | + name = (known after apply) 2026-04-11 01:39:00.810422 | orchestrator | + port = (known after apply) 2026-04-11 01:39:00.810430 | orchestrator | + uuid = (known after apply) 2026-04-11 01:39:00.810437 | orchestrator | } 2026-04-11 01:39:00.810444 | orchestrator | } 2026-04-11 01:39:00.810564 | orchestrator | 2026-04-11 01:39:00.810587 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-11 01:39:00.810597 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-11 01:39:00.810605 | orchestrator | + fingerprint = (known after apply) 2026-04-11 01:39:00.810613 | orchestrator | + id = (known after apply) 2026-04-11 01:39:00.810621 | orchestrator | + name = "testbed" 2026-04-11 01:39:00.810643 | orchestrator | + private_key = 
(sensitive value) 2026-04-11 01:39:00.810651 | orchestrator | + public_key = (known after apply) 2026-04-11 01:39:00.810659 | orchestrator | + region = (known after apply) 2026-04-11 01:39:00.810667 | orchestrator | + user_id = (known after apply) 2026-04-11 01:39:00.810675 | orchestrator | } 2026-04-11 01:39:00.810760 | orchestrator | 2026-04-11 01:39:00.810784 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-11 01:39:00.810794 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-11 01:39:00.810810 | orchestrator | + device = (known after apply) 2026-04-11 01:39:00.810833 | orchestrator | + id = (known after apply) 2026-04-11 01:39:00.810841 | orchestrator | + instance_id = (known after apply) 2026-04-11 01:39:00.810849 | orchestrator | + region = (known after apply) 2026-04-11 01:39:00.810857 | orchestrator | + volume_id = (known after apply) 2026-04-11 01:39:00.810865 | orchestrator | } 2026-04-11 01:39:00.810946 | orchestrator | 2026-04-11 01:39:00.810969 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-11 01:39:00.810978 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-11 01:39:00.810986 | orchestrator | + device = (known after apply) 2026-04-11 01:39:00.810994 | orchestrator | + id = (known after apply) 2026-04-11 01:39:00.811002 | orchestrator | + instance_id = (known after apply) 2026-04-11 01:39:00.811010 | orchestrator | + region = (known after apply) 2026-04-11 01:39:00.811018 | orchestrator | + volume_id = (known after apply) 2026-04-11 01:39:00.811025 | orchestrator | } 2026-04-11 01:39:00.811118 | orchestrator | 2026-04-11 01:39:00.811142 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-11 01:39:00.811152 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-11 01:39:00.820979 | orchestrator | + network_id = (known after apply)
2026-04-11 01:39:00.820987 | orchestrator | + no_gateway = false
2026-04-11 01:39:00.820995 | orchestrator | + region = (known after apply)
2026-04-11 01:39:00.821003 | orchestrator | + service_types = (known after apply)
2026-04-11 01:39:00.821019 | orchestrator | + tenant_id = (known after apply)
2026-04-11 01:39:00.821027 | orchestrator |
2026-04-11 01:39:00.821035 | orchestrator | + allocation_pool {
2026-04-11 01:39:00.821043 | orchestrator | + end = "192.168.31.250"
2026-04-11 01:39:00.821051 | orchestrator | + start = "192.168.31.200"
2026-04-11 01:39:00.821059 | orchestrator | }
2026-04-11 01:39:00.821067 | orchestrator | }
2026-04-11 01:39:00.821123 | orchestrator |
2026-04-11 01:39:00.821146 | orchestrator | # terraform_data.image will be created
2026-04-11 01:39:00.821155 | orchestrator | + resource "terraform_data" "image" {
2026-04-11 01:39:00.821163 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.821171 | orchestrator | + input = "Ubuntu 24.04"
2026-04-11 01:39:00.821179 | orchestrator | + output = (known after apply)
2026-04-11 01:39:00.821187 | orchestrator | }
2026-04-11 01:39:00.821259 | orchestrator |
2026-04-11 01:39:00.821282 | orchestrator | # terraform_data.image_node will be created
2026-04-11 01:39:00.821291 | orchestrator | + resource "terraform_data" "image_node" {
2026-04-11 01:39:00.821299 | orchestrator | + id = (known after apply)
2026-04-11 01:39:00.821307 | orchestrator | + input = "Ubuntu 24.04"
2026-04-11 01:39:00.821315 | orchestrator | + output = (known after apply)
2026-04-11 01:39:00.821323 | orchestrator | }
2026-04-11 01:39:00.821351 | orchestrator |
2026-04-11 01:39:00.821361 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
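The plan entries above correspond one-to-one to Terraform resources in the testbed configuration. As a hedged illustration only (not the testbed's actual source; the reference to the management security group is an assumption), the VRRP rule printed in the plan could be declared roughly like this:

```hcl
# Sketch reconstructed from the plan output above; attribute values mirror the
# plan, but the surrounding configuration and references are assumptions.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP is IP protocol 112, not TCP/UDP
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```

Note that `protocol` takes the numeric IP protocol as a string here, which is why the plan shows `"112"` rather than a protocol name.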
2026-04-11 01:39:00.821383 | orchestrator |
2026-04-11 01:39:00.821392 | orchestrator | Changes to Outputs:
2026-04-11 01:39:00.821412 | orchestrator | + manager_address = (sensitive value)
2026-04-11 01:39:00.821421 | orchestrator | + private_key = (sensitive value)
2026-04-11 01:39:00.925933 | orchestrator | terraform_data.image: Creating...
2026-04-11 01:39:01.087941 | orchestrator | terraform_data.image: Creation complete after 0s [id=304d05f6-24c7-6f36-aeb5-7145d81f8f98]
2026-04-11 01:39:01.090592 | orchestrator | terraform_data.image_node: Creating...
2026-04-11 01:39:01.090655 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=687b0398-d522-b224-3528-635214abe41e]
2026-04-11 01:39:01.096722 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-11 01:39:01.096773 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-11 01:39:01.103621 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-11 01:39:01.105371 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-11 01:39:01.114836 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-11 01:39:01.115921 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-11 01:39:01.118555 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-11 01:39:01.134938 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-11 01:39:01.136400 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-11 01:39:01.139947 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-11 01:39:01.560867 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-11 01:39:01.567806 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
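The `terraform_data.image` and `terraform_data.image_node` resources created first simply echo the image name ("Ubuntu 24.04") so that the `openstack_images_image_v2` data sources read immediately afterwards can resolve it to an image ID. A plausible sketch of that pattern (an assumption, not the testbed's actual source):

```hcl
# terraform_data passes its input through to output once applied.
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

# The data source then looks the name up in Glance; both data sources in the
# log resolve to the same image ID, matching a shared name like this.
data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true
}
```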
2026-04-11 01:39:01.626484 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-04-11 01:39:01.632189 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-11 01:39:01.915842 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-11 01:39:01.920595 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-11 01:39:02.161596 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=2746460b-6e44-4be8-99c3-d032744402fd]
2026-04-11 01:39:02.169541 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-11 01:39:04.713762 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=1f351e21-4e71-4ad4-9e94-6bc6cac8fc78]
2026-04-11 01:39:04.725751 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-11 01:39:04.731833 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=5205d61f1e3822ef480b84ba9bf8e1d5a2e9b9d8]
2026-04-11 01:39:04.736450 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=7ad0a670-80b6-4125-8ef3-6216ce6e20ac]
2026-04-11 01:39:04.743283 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-11 01:39:04.743397 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-11 01:39:04.750755 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=6b13cbcd1f0356736391ed26e2e0d0b30b0a750d]
2026-04-11 01:39:04.766278 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=01e94ece-63c1-4d76-b314-73e572c2946f]
2026-04-11 01:39:04.769534 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-11 01:39:04.775082 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-11 01:39:04.783868 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=56bfdd1e-1096-4320-af10-78d4715d0af3]
2026-04-11 01:39:04.794211 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-11 01:39:04.794703 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=17a8d280-644e-4721-8a5f-cc5da3df4735]
2026-04-11 01:39:04.800598 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-11 01:39:04.813449 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=a5d3052c-abdd-49f3-bb0e-d9386ad7b01c]
2026-04-11 01:39:04.825305 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-11 01:39:04.828680 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=16023bbf-7f58-4b7d-abd1-681ece48f898]
2026-04-11 01:39:04.835856 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-11 01:39:04.871626 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=7d9c4f1c-d40b-45bb-8e87-01db3fc808d7]
2026-04-11 01:39:05.127745 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=f4a5e742-034f-4b0e-a516-1096b0558dbb]
2026-04-11 01:39:05.500281 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=cbafe9d3-7c35-4bd1-ae60-dc778a424d68]
2026-04-11 01:39:06.247606 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=f3e8e6aa-68e7-4336-ad95-d6fb37e87258]
2026-04-11 01:39:06.258443 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-11 01:39:08.114297 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=4dd7cb49-c2ed-4736-af78-304fedd57f5a]
2026-04-11 01:39:08.142640 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=122e9594-abc5-4472-bfad-4cda336274d4]
2026-04-11 01:39:08.169756 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=7f54fce7-d818-40b6-a511-c244d10d845a]
2026-04-11 01:39:08.192715 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=6e1b70df-e983-45b9-8c79-0f15e5c6cff7]
2026-04-11 01:39:08.207358 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=1a75c226-2d22-4742-843b-bdb54b765e20]
2026-04-11 01:39:08.473216 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=0c2a3b65-0cab-4606-87f3-af05935d1899]
2026-04-11 01:39:09.311211 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=9cbf9711-487e-4022-b913-dcbe33287bab]
2026-04-11 01:39:09.319432 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-11 01:39:09.320627 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-11 01:39:09.321414 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-11 01:39:09.501004 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=0f13b6f6-9e5c-4624-951b-1dcc4d4806b6]
2026-04-11 01:39:09.507338 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=247672bb-f82c-45c6-82e5-0260df592c26]
2026-04-11 01:39:09.512448 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-11 01:39:09.513047 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-11 01:39:09.518696 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-11 01:39:09.519373 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-11 01:39:09.521915 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-11 01:39:09.521954 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-11 01:39:09.521962 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-11 01:39:09.523928 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-11 01:39:09.533668 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-11 01:39:09.721273 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=1a4cf217-cbc8-4211-ab89-f8007d027eeb]
2026-04-11 01:39:09.732021 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-11 01:39:09.889451 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=b3b97bdf-ba22-4305-9489-abbf653bad18]
2026-04-11 01:39:09.896161 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-11 01:39:10.059002 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=e69386be-5169-4567-bd54-71e3f9f1ab07]
2026-04-11 01:39:10.060876 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=f1053637-17ab-4504-a9d0-06b95f44e425]
2026-04-11 01:39:10.064960 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-11 01:39:10.069631 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-11 01:39:10.080368 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=8d9ba641-9a87-421f-a3e8-7ab9f3f0c5d6]
2026-04-11 01:39:10.084875 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-11 01:39:10.139372 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=244c4a48-aa1e-4963-b559-952e39690767]
2026-04-11 01:39:10.148437 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-11 01:39:10.217189 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=4ffc02fe-d386-4546-b34a-88c1519f38d5]
2026-04-11 01:39:10.222623 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-11 01:39:10.268572 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=19b1ba75-73a0-4a7f-b3d4-61abc99d0b85]
2026-04-11 01:39:10.282862 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=c1025cde-d4c0-4a2d-9e99-f649782ac69e]
2026-04-11 01:39:10.306436 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=51aa4b47-2795-4029-b326-3b61538458dc]
2026-04-11 01:39:10.306663 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=df7bc680-6d48-442f-9fed-4d82823cc7b2]
2026-04-11 01:39:10.430687 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=92ad4902-83b2-4a75-a847-72461faab5c9]
2026-04-11 01:39:10.468515 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=40caa442-a544-4486-be37-7092bd7a44ea]
2026-04-11 01:39:10.581706 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=5ea00f8b-cc69-45ad-a661-1ea9f1c5b2d4]
2026-04-11 01:39:10.614146 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=e65fc00c-da17-489e-8b90-9fbb81a34a34]
2026-04-11 01:39:10.776131 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=61142fce-7afc-42fd-bd23-712173abc97f]
2026-04-11 01:39:11.972452 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=7e71bd33-54b6-43c4-b0a0-c747fc0f3269]
2026-04-11 01:39:11.985805 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-11 01:39:12.012156 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-11 01:39:12.012322 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-11 01:39:12.012612 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-11 01:39:12.013625 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-11 01:39:12.026345 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-11 01:39:12.028170 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-11 01:39:13.242690 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=93162cfa-d81b-48dc-a518-adc5669c8a29]
2026-04-11 01:39:13.252228 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-11 01:39:13.262426 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-11 01:39:13.263879 | orchestrator | local_file.inventory: Creating...
2026-04-11 01:39:13.267609 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=d2168664d89ec71279c06448a97f90c18c1aa7bc]
2026-04-11 01:39:13.270377 | orchestrator | local_file.inventory: Creation complete after 0s [id=70e31154fbcb4260fa1fc163fe870437c36eab59]
2026-04-11 01:39:14.550251 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=93162cfa-d81b-48dc-a518-adc5669c8a29]
2026-04-11 01:39:22.012898 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-11 01:39:22.014065 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-11 01:39:22.017535 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-11 01:39:22.017624 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-11 01:39:22.030096 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-11 01:39:22.030175 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-11 01:39:32.013806 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-11 01:39:32.014926 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-11 01:39:32.018423 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-11 01:39:32.018552 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-11 01:39:32.030651 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-11 01:39:32.030726 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-11 01:39:32.409482 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=1d0e72b4-75b3-4828-a559-c0c9ab22a542]
2026-04-11 01:39:32.496819 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=3644bf13-4a69-4675-a3be-bd67bad90a30]
2026-04-11 01:39:32.534931 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=7882a152-b830-4b7a-a91c-00038d7f0cfe]
2026-04-11 01:39:42.023478 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-11 01:39:42.023603 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-11 01:39:42.031906 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-11 01:39:42.661542 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=acead4e4-f039-4a0d-9565-e40f684551ad]
2026-04-11 01:39:42.735283 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=3cc9959b-0c3d-42d8-ac13-fcc299998e2f]
2026-04-11 01:39:42.762106 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=acf5ee9a-6f55-4c93-bae4-dbf9b89a3231]
2026-04-11 01:39:42.785623 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-11 01:39:42.791125 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-11 01:39:42.797552 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-11 01:39:42.799381 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-11 01:39:42.800128 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-11 01:39:42.800592 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-11 01:39:42.805834 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-11 01:39:42.809716 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4754050267506484887]
2026-04-11 01:39:42.810396 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-11 01:39:42.810951 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-11 01:39:42.811879 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-11 01:39:42.845450 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
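The volume-attachment IDs logged below take the form `<server id>/<volume id>`, and the nine extra volumes end up attached to three of the six node servers. A hedged sketch of a matching configuration (the index arithmetic is an assumption inferred from the attachment pattern in this log, not the testbed's actual source):

```hcl
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = 9
  # Assumed mapping: volume i attaches to node_server[3 + i % 3], which is
  # consistent with the server/volume ID pairs in the apply output below.
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```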
2026-04-11 01:39:46.137767 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=acf5ee9a-6f55-4c93-bae4-dbf9b89a3231/7ad0a670-80b6-4125-8ef3-6216ce6e20ac]
2026-04-11 01:39:46.162587 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=acead4e4-f039-4a0d-9565-e40f684551ad/17a8d280-644e-4721-8a5f-cc5da3df4735]
2026-04-11 01:39:46.167201 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=acf5ee9a-6f55-4c93-bae4-dbf9b89a3231/01e94ece-63c1-4d76-b314-73e572c2946f]
2026-04-11 01:39:46.201305 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=acead4e4-f039-4a0d-9565-e40f684551ad/1f351e21-4e71-4ad4-9e94-6bc6cac8fc78]
2026-04-11 01:39:46.201691 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=3644bf13-4a69-4675-a3be-bd67bad90a30/7d9c4f1c-d40b-45bb-8e87-01db3fc808d7]
2026-04-11 01:39:46.249975 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=3644bf13-4a69-4675-a3be-bd67bad90a30/16023bbf-7f58-4b7d-abd1-681ece48f898]
2026-04-11 01:39:52.299138 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=acf5ee9a-6f55-4c93-bae4-dbf9b89a3231/f4a5e742-034f-4b0e-a516-1096b0558dbb]
2026-04-11 01:39:52.315903 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 9s [id=acead4e4-f039-4a0d-9565-e40f684551ad/56bfdd1e-1096-4320-af10-78d4715d0af3]
2026-04-11 01:39:52.347575 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 9s [id=3644bf13-4a69-4675-a3be-bd67bad90a30/a5d3052c-abdd-49f3-bb0e-d9386ad7b01c]
2026-04-11 01:39:52.846204 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-11 01:40:02.847170 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-11 01:40:03.592488 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=14478796-d425-4e4a-9633-586587186d3e]
2026-04-11 01:40:03.624316 | orchestrator |
2026-04-11 01:40:03.624383 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-11 01:40:03.624390 | orchestrator |
2026-04-11 01:40:03.624395 | orchestrator | Outputs:
2026-04-11 01:40:03.624399 | orchestrator |
2026-04-11 01:40:03.624403 | orchestrator | manager_address =
2026-04-11 01:40:03.624408 | orchestrator | private_key =
2026-04-11 01:40:03.769342 | orchestrator | ok: Runtime: 0:01:07.480555
2026-04-11 01:40:03.803304 |
2026-04-11 01:40:03.803437 | TASK [Fetch manager address]
2026-04-11 01:40:04.304322 | orchestrator | ok
2026-04-11 01:40:04.314184 |
2026-04-11 01:40:04.314307 | TASK [Set manager_host address]
2026-04-11 01:40:04.393651 | orchestrator | ok
2026-04-11 01:40:04.403804 |
2026-04-11 01:40:04.403925 | LOOP [Update ansible collections]
2026-04-11 01:40:05.355493 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-11 01:40:05.355840 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-11 01:40:05.355935 | orchestrator | Starting galaxy collection install process
2026-04-11 01:40:05.355979 | orchestrator | Process install dependency map
2026-04-11 01:40:05.356011 | orchestrator | Starting collection install process
2026-04-11 01:40:05.356062 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-04-11 01:40:05.356096 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-04-11 01:40:05.356131 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-11 01:40:05.356202 | orchestrator | ok: Item: commons Runtime: 0:00:00.593481
2026-04-11 01:40:06.374331 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-11 01:40:06.374493 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-11 01:40:06.374541 | orchestrator | Starting galaxy collection install process
2026-04-11 01:40:06.374576 | orchestrator | Process install dependency map
2026-04-11 01:40:06.374697 | orchestrator | Starting collection install process
2026-04-11 01:40:06.374736 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-04-11 01:40:06.374769 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-04-11 01:40:06.374800 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-11 01:40:06.374876 | orchestrator | ok: Item: services Runtime: 0:00:00.687569
2026-04-11 01:40:06.394406 |
2026-04-11 01:40:06.394578 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-11 01:40:16.985699 | orchestrator | ok
2026-04-11 01:40:16.997454 |
2026-04-11 01:40:16.997588 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-11 01:41:17.042269 | orchestrator | ok
2026-04-11 01:41:17.053511 |
2026-04-11 01:41:17.053633 | TASK [Fetch manager ssh hostkey]
2026-04-11 01:41:18.634892 | orchestrator | Output suppressed because no_log was given
2026-04-11 01:41:18.652206 |
2026-04-11 01:41:18.652385 | TASK [Get ssh keypair from terraform environment]
2026-04-11 01:41:19.189208 | orchestrator | ok: Runtime: 0:00:00.009874
2026-04-11 01:41:19.205821 |
2026-04-11 01:41:19.205989 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-11 01:41:19.253386 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-11 01:41:19.265603 |
2026-04-11 01:41:19.265773 | TASK [Run manager part 0]
2026-04-11 01:41:20.268416 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-11 01:41:20.331937 | orchestrator |
2026-04-11 01:41:20.332005 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-11 01:41:20.332018 | orchestrator |
2026-04-11 01:41:20.332038 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-11 01:41:22.527833 | orchestrator | ok: [testbed-manager]
2026-04-11 01:41:22.527917 | orchestrator |
2026-04-11 01:41:22.527943 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-11 01:41:22.527952 | orchestrator |
2026-04-11 01:41:22.527962 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-11 01:41:24.549598 | orchestrator | ok: [testbed-manager]
2026-04-11 01:41:24.549645 | orchestrator |
2026-04-11 01:41:24.549652 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-11 01:41:25.298731 | orchestrator | ok: [testbed-manager]
2026-04-11 01:41:25.299107 | orchestrator |
2026-04-11 01:41:25.299137 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-11 01:41:25.350503 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:41:25.350581 | orchestrator |
2026-04-11 01:41:25.350599 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-11 01:41:25.394231 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:41:25.394300 | orchestrator |
2026-04-11 01:41:25.394313 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-04-11 01:41:25.441699 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:41:25.441771 | orchestrator | 2026-04-11 01:41:25.441782 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-04-11 01:41:26.307957 | orchestrator | changed: [testbed-manager] 2026-04-11 01:41:26.308023 | orchestrator | 2026-04-11 01:41:26.308032 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-04-11 01:44:26.869925 | orchestrator | changed: [testbed-manager] 2026-04-11 01:44:26.870063 | orchestrator | 2026-04-11 01:44:26.870097 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-11 01:46:05.274007 | orchestrator | changed: [testbed-manager] 2026-04-11 01:46:05.274101 | orchestrator | 2026-04-11 01:46:05.274119 | orchestrator | TASK [Install required packages] *********************************************** 2026-04-11 01:46:31.303714 | orchestrator | changed: [testbed-manager] 2026-04-11 01:46:31.303753 | orchestrator | 2026-04-11 01:46:31.303760 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-04-11 01:46:41.578683 | orchestrator | changed: [testbed-manager] 2026-04-11 01:46:41.578731 | orchestrator | 2026-04-11 01:46:41.578740 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-11 01:46:41.617043 | orchestrator | ok: [testbed-manager] 2026-04-11 01:46:41.617081 | orchestrator | 2026-04-11 01:46:41.617090 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-11 01:46:42.477055 | orchestrator | ok: [testbed-manager] 2026-04-11 01:46:42.477141 | orchestrator | 2026-04-11 01:46:42.477168 | orchestrator | TASK [Create venv directory] 
*************************************************** 2026-04-11 01:46:43.287923 | orchestrator | changed: [testbed-manager] 2026-04-11 01:46:43.288047 | orchestrator | 2026-04-11 01:46:43.288072 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-11 01:46:50.189313 | orchestrator | changed: [testbed-manager] 2026-04-11 01:46:50.189362 | orchestrator | 2026-04-11 01:46:50.189371 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-11 01:46:56.831223 | orchestrator | changed: [testbed-manager] 2026-04-11 01:46:56.831373 | orchestrator | 2026-04-11 01:46:56.831393 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-11 01:46:59.909725 | orchestrator | changed: [testbed-manager] 2026-04-11 01:46:59.909828 | orchestrator | 2026-04-11 01:46:59.909846 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-11 01:47:01.946493 | orchestrator | changed: [testbed-manager] 2026-04-11 01:47:01.947290 | orchestrator | 2026-04-11 01:47:01.947315 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-11 01:47:03.158141 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-11 01:47:03.158198 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-11 01:47:03.158204 | orchestrator | 2026-04-11 01:47:03.158212 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-11 01:47:03.193424 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-11 01:47:03.193508 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-11 01:47:03.193519 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2026-04-11 01:47:03.193527 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-11 01:47:06.492356 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-11 01:47:06.492420 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-11 01:47:06.492431 | orchestrator | 2026-04-11 01:47:06.492441 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-11 01:47:07.062569 | orchestrator | changed: [testbed-manager] 2026-04-11 01:47:07.062632 | orchestrator | 2026-04-11 01:47:07.062646 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-11 01:48:28.216710 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-11 01:48:28.216786 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-11 01:48:28.216799 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-11 01:48:28.216809 | orchestrator | 2026-04-11 01:48:28.216821 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-11 01:48:30.711787 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-11 01:48:30.711871 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-11 01:48:30.711891 | orchestrator | 2026-04-11 01:48:30.711912 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-11 01:48:30.711923 | orchestrator | 2026-04-11 01:48:30.711934 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-11 01:48:32.211795 | orchestrator | ok: [testbed-manager] 2026-04-11 01:48:32.211920 | orchestrator | 2026-04-11 01:48:32.211938 | orchestrator | TASK [osism.commons.operator : Gather variables 
for each operating system] ***** 2026-04-11 01:48:32.256477 | orchestrator | ok: [testbed-manager] 2026-04-11 01:48:32.256571 | orchestrator | 2026-04-11 01:48:32.256588 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-11 01:48:32.319174 | orchestrator | ok: [testbed-manager] 2026-04-11 01:48:32.319279 | orchestrator | 2026-04-11 01:48:32.319296 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-11 01:48:33.164190 | orchestrator | changed: [testbed-manager] 2026-04-11 01:48:33.164260 | orchestrator | 2026-04-11 01:48:33.164271 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-11 01:48:33.950601 | orchestrator | changed: [testbed-manager] 2026-04-11 01:48:33.950705 | orchestrator | 2026-04-11 01:48:33.950728 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-11 01:48:35.419896 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-11 01:48:35.419991 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-11 01:48:35.420006 | orchestrator | 2026-04-11 01:48:35.420019 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-11 01:48:36.944456 | orchestrator | changed: [testbed-manager] 2026-04-11 01:48:36.944552 | orchestrator | 2026-04-11 01:48:36.944571 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-11 01:48:38.875623 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-11 01:48:38.875693 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-11 01:48:38.875710 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-11 01:48:38.875714 | orchestrator | 2026-04-11 01:48:38.875719 | orchestrator | TASK [osism.commons.operator : 
Set custom environment variables in .bashrc configuration file] *** 2026-04-11 01:48:38.936524 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:48:38.936600 | orchestrator | 2026-04-11 01:48:38.936611 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-11 01:48:39.016128 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:48:39.016171 | orchestrator | 2026-04-11 01:48:39.016179 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-11 01:48:39.616404 | orchestrator | changed: [testbed-manager] 2026-04-11 01:48:39.616446 | orchestrator | 2026-04-11 01:48:39.616455 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-11 01:48:39.684412 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:48:39.684541 | orchestrator | 2026-04-11 01:48:39.684566 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-11 01:48:40.669930 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-11 01:48:40.670091 | orchestrator | changed: [testbed-manager] 2026-04-11 01:48:40.670111 | orchestrator | 2026-04-11 01:48:40.670125 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-11 01:48:40.712521 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:48:40.712620 | orchestrator | 2026-04-11 01:48:40.712642 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-11 01:48:40.750253 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:48:40.750349 | orchestrator | 2026-04-11 01:48:40.750367 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-11 01:48:40.791785 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:48:40.791881 | orchestrator | 2026-04-11 01:48:40.791898 | 
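[Editor's note] The operator tasks above append `export LANGUAGE/LANG/LC_ALL=C.UTF-8` lines to the operator's `.bashrc`. A hedged sketch of the idempotent append such a task implies, run against a temp file rather than a real `~/.bashrc` (the `add_line` helper is illustrative, not part of the role):

```shell
# Sketch only: append each export line once, never duplicating it on reruns.
RC=$(mktemp)                        # stand-in for ~/.bashrc
add_line() { grep -qxF "$1" "$RC" || echo "$1" >> "$RC"; }

for line in 'export LANGUAGE=C.UTF-8' 'export LANG=C.UTF-8' 'export LC_ALL=C.UTF-8'; do
    add_line "$line"
    add_line "$line"                # second call is a no-op
done
```

Idempotence is what lets the role report `changed` on first run and `ok` on reruns.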
orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-11 01:48:40.873270 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:48:40.873365 | orchestrator | 2026-04-11 01:48:40.873433 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-11 01:48:41.626010 | orchestrator | ok: [testbed-manager] 2026-04-11 01:48:41.626134 | orchestrator | 2026-04-11 01:48:41.626150 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-11 01:48:41.626162 | orchestrator | 2026-04-11 01:48:41.626176 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-11 01:48:43.113260 | orchestrator | ok: [testbed-manager] 2026-04-11 01:48:43.113357 | orchestrator | 2026-04-11 01:48:43.113374 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-11 01:48:44.145496 | orchestrator | changed: [testbed-manager] 2026-04-11 01:48:44.145595 | orchestrator | 2026-04-11 01:48:44.145613 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 01:48:44.145627 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-11 01:48:44.145638 | orchestrator | 2026-04-11 01:48:44.575964 | orchestrator | ok: Runtime: 0:07:24.664343 2026-04-11 01:48:44.593277 | 2026-04-11 01:48:44.593429 | TASK [Point out that logging in on the manager is now possible] 2026-04-11 01:48:44.639444 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-04-11 01:48:44.648870 | 2026-04-11 01:48:44.648993 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-11 01:48:44.680842 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. 
There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-11 01:48:44.688350 | 2026-04-11 01:48:44.688459 | TASK [Run manager part 1 + 2] 2026-04-11 01:48:45.575967 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-11 01:48:45.634846 | orchestrator | 2026-04-11 01:48:45.634896 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-11 01:48:45.634903 | orchestrator | 2026-04-11 01:48:45.634916 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-11 01:48:48.896553 | orchestrator | ok: [testbed-manager] 2026-04-11 01:48:48.896633 | orchestrator | 2026-04-11 01:48:48.896676 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-11 01:48:48.929102 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:48:48.929184 | orchestrator | 2026-04-11 01:48:48.929195 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-11 01:48:48.979186 | orchestrator | ok: [testbed-manager] 2026-04-11 01:48:48.979254 | orchestrator | 2026-04-11 01:48:48.979271 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-11 01:48:49.035837 | orchestrator | ok: [testbed-manager] 2026-04-11 01:48:49.035913 | orchestrator | 2026-04-11 01:48:49.035926 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-11 01:48:49.104502 | orchestrator | ok: [testbed-manager] 2026-04-11 01:48:49.104552 | orchestrator | 2026-04-11 01:48:49.104560 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-11 01:48:49.174603 | orchestrator | ok: [testbed-manager] 2026-04-11 01:48:49.174656 | orchestrator | 2026-04-11 01:48:49.174664 | orchestrator | TASK 
[osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-11 01:48:49.231205 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-11 01:48:49.231270 | orchestrator | 2026-04-11 01:48:49.231280 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-11 01:48:50.021380 | orchestrator | ok: [testbed-manager] 2026-04-11 01:48:50.021460 | orchestrator | 2026-04-11 01:48:50.021470 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-11 01:48:50.069511 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:48:50.069577 | orchestrator | 2026-04-11 01:48:50.069588 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-11 01:48:51.551859 | orchestrator | changed: [testbed-manager] 2026-04-11 01:48:51.551921 | orchestrator | 2026-04-11 01:48:51.551930 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-11 01:48:52.188411 | orchestrator | ok: [testbed-manager] 2026-04-11 01:48:52.188564 | orchestrator | 2026-04-11 01:48:52.188582 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-11 01:48:53.392151 | orchestrator | changed: [testbed-manager] 2026-04-11 01:48:53.392263 | orchestrator | 2026-04-11 01:48:53.392301 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-11 01:49:10.349765 | orchestrator | changed: [testbed-manager] 2026-04-11 01:49:10.349834 | orchestrator | 2026-04-11 01:49:10.349850 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-11 01:49:11.100970 | orchestrator | ok: [testbed-manager] 2026-04-11 01:49:11.101006 | orchestrator | 2026-04-11 
01:49:11.101013 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-11 01:49:11.150260 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:49:11.150298 | orchestrator | 2026-04-11 01:49:11.150305 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-11 01:49:12.192683 | orchestrator | changed: [testbed-manager] 2026-04-11 01:49:12.192732 | orchestrator | 2026-04-11 01:49:12.192743 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-11 01:49:13.241783 | orchestrator | changed: [testbed-manager] 2026-04-11 01:49:13.241885 | orchestrator | 2026-04-11 01:49:13.241902 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-11 01:49:13.910813 | orchestrator | changed: [testbed-manager] 2026-04-11 01:49:13.910905 | orchestrator | 2026-04-11 01:49:13.910921 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-11 01:49:13.952583 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-11 01:49:13.952699 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-11 01:49:13.952715 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-11 01:49:13.952728 | orchestrator | deprecation_warnings=False in ansible.cfg. 
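[Editor's note] The "Copy SSH public key" / "Copy SSH private key" tasks above install the keypair on the manager; SSH requires restrictive modes (700 on `.ssh`, 600 on the private key) or it refuses the key. A sketch with placeholder paths and contents, assuming those conventional modes:

```shell
# Sketch only: install a keypair with the modes sshd/ssh expect.
# DEST and the key contents are placeholders, not the job's real files.
DEST=$(mktemp -d)
install -d -m 700 "$DEST/.ssh"
printf 'placeholder-private-key\n' > "$DEST/.ssh/id_rsa"
chmod 600 "$DEST/.ssh/id_rsa"
printf 'placeholder-public-key\n' > "$DEST/.ssh/id_rsa.pub"
chmod 644 "$DEST/.ssh/id_rsa.pub"
```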
2026-04-11 01:49:16.139488 | orchestrator | changed: [testbed-manager] 2026-04-11 01:49:16.139732 | orchestrator | 2026-04-11 01:49:16.139747 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-11 01:49:25.999527 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-11 01:49:25.999621 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-11 01:49:25.999637 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-11 01:49:25.999650 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-11 01:49:25.999668 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-11 01:49:25.999680 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-11 01:49:25.999690 | orchestrator | 2026-04-11 01:49:25.999703 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-11 01:49:27.105868 | orchestrator | changed: [testbed-manager] 2026-04-11 01:49:27.105908 | orchestrator | 2026-04-11 01:49:27.105915 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-11 01:49:30.480650 | orchestrator | changed: [testbed-manager] 2026-04-11 01:49:30.480707 | orchestrator | 2026-04-11 01:49:30.480720 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-11 01:49:30.513894 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:49:30.513989 | orchestrator | 2026-04-11 01:49:30.514008 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-11 01:51:23.662148 | orchestrator | changed: [testbed-manager] 2026-04-11 01:51:23.662188 | orchestrator | 2026-04-11 01:51:23.662194 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-11 01:51:24.936556 | orchestrator | ok: [testbed-manager] 2026-04-11 01:51:24.936599 | 
orchestrator | 2026-04-11 01:51:24.936608 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 01:51:24.936615 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-11 01:51:24.936620 | orchestrator | 2026-04-11 01:51:25.333762 | orchestrator | ok: Runtime: 0:02:40.030026 2026-04-11 01:51:25.348149 | 2026-04-11 01:51:25.348263 | TASK [Reboot manager] 2026-04-11 01:51:26.886146 | orchestrator | ok: Runtime: 0:00:01.110222 2026-04-11 01:51:26.903350 | 2026-04-11 01:51:26.903518 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-11 01:51:44.214456 | orchestrator | ok 2026-04-11 01:51:44.225340 | 2026-04-11 01:51:44.225503 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-11 01:52:44.273646 | orchestrator | ok 2026-04-11 01:52:44.283216 | 2026-04-11 01:52:44.283351 | TASK [Deploy manager + bootstrap nodes] 2026-04-11 01:52:47.251466 | orchestrator | 2026-04-11 01:52:47.251676 | orchestrator | # DEPLOY MANAGER 2026-04-11 01:52:47.252378 | orchestrator | 2026-04-11 01:52:47.252404 | orchestrator | + set -e 2026-04-11 01:52:47.252412 | orchestrator | + echo 2026-04-11 01:52:47.252422 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-11 01:52:47.252435 | orchestrator | + echo 2026-04-11 01:52:47.252470 | orchestrator | + cat /opt/manager-vars.sh 2026-04-11 01:52:47.254484 | orchestrator | export NUMBER_OF_NODES=6 2026-04-11 01:52:47.254523 | orchestrator | 2026-04-11 01:52:47.254532 | orchestrator | export CEPH_VERSION=reef 2026-04-11 01:52:47.254541 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-11 01:52:47.254549 | orchestrator | export MANAGER_VERSION=9.5.0 2026-04-11 01:52:47.254564 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-11 01:52:47.254571 | orchestrator | 2026-04-11 01:52:47.254582 | orchestrator | export ARA=false 2026-04-11 01:52:47.254589 | orchestrator 
| export DEPLOY_MODE=manager 2026-04-11 01:52:47.254600 | orchestrator | export TEMPEST=false 2026-04-11 01:52:47.254607 | orchestrator | export IS_ZUUL=true 2026-04-11 01:52:47.254614 | orchestrator | 2026-04-11 01:52:47.254626 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48 2026-04-11 01:52:47.254635 | orchestrator | export EXTERNAL_API=false 2026-04-11 01:52:47.254642 | orchestrator | 2026-04-11 01:52:47.254649 | orchestrator | export IMAGE_USER=ubuntu 2026-04-11 01:52:47.254658 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-11 01:52:47.254665 | orchestrator | 2026-04-11 01:52:47.254672 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-11 01:52:47.254677 | orchestrator | 2026-04-11 01:52:47.254683 | orchestrator | + echo 2026-04-11 01:52:47.254693 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-11 01:52:47.255428 | orchestrator | ++ export INTERACTIVE=false 2026-04-11 01:52:47.255447 | orchestrator | ++ INTERACTIVE=false 2026-04-11 01:52:47.255455 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-11 01:52:47.255463 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-11 01:52:47.255819 | orchestrator | + source /opt/manager-vars.sh 2026-04-11 01:52:47.255831 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-11 01:52:47.255839 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-11 01:52:47.255846 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-11 01:52:47.255853 | orchestrator | ++ CEPH_VERSION=reef 2026-04-11 01:52:47.255860 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-11 01:52:47.255868 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-11 01:52:47.255875 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-11 01:52:47.255882 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-11 01:52:47.255889 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-11 01:52:47.255906 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-11 01:52:47.255914 | orchestrator | ++ export ARA=false 
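[Editor's note] The xtrace above shows the deploy scripts repeatedly `source /opt/manager-vars.sh` to pick up the testbed settings. A stand-in env file with a few of the values from the log, plus a required-variable guard that is an illustrative addition (the real scripts rely on `set -e` rather than an explicit check):

```shell
# Sketch only: VARS stands in for /opt/manager-vars.sh; the guard loop is
# an illustrative addition, not part of the actual scripts.
VARS=$(mktemp)
cat > "$VARS" <<'EOF'
export MANAGER_VERSION=9.5.0
export OPENSTACK_VERSION=2024.2
export CEPH_VERSION=reef
EOF

. "$VARS"
for v in MANAGER_VERSION OPENSTACK_VERSION CEPH_VERSION; do
    eval "val=\${$v:-}"
    [ -n "$val" ] || { echo "missing $v" >&2; exit 1; }
done
```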
2026-04-11 01:52:47.255921 | orchestrator | ++ ARA=false 2026-04-11 01:52:47.255927 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-11 01:52:47.255934 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-11 01:52:47.255941 | orchestrator | ++ export TEMPEST=false 2026-04-11 01:52:47.255948 | orchestrator | ++ TEMPEST=false 2026-04-11 01:52:47.255959 | orchestrator | ++ export IS_ZUUL=true 2026-04-11 01:52:47.255966 | orchestrator | ++ IS_ZUUL=true 2026-04-11 01:52:47.255973 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48 2026-04-11 01:52:47.255980 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48 2026-04-11 01:52:47.255986 | orchestrator | ++ export EXTERNAL_API=false 2026-04-11 01:52:47.255993 | orchestrator | ++ EXTERNAL_API=false 2026-04-11 01:52:47.256003 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-11 01:52:47.256010 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-11 01:52:47.256017 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-11 01:52:47.256024 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-11 01:52:47.256031 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-11 01:52:47.256038 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-11 01:52:47.256045 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-11 01:52:47.310335 | orchestrator | + docker version 2026-04-11 01:52:47.428769 | orchestrator | Client: Docker Engine - Community 2026-04-11 01:52:47.428854 | orchestrator | Version: 27.5.1 2026-04-11 01:52:47.428867 | orchestrator | API version: 1.47 2026-04-11 01:52:47.428876 | orchestrator | Go version: go1.22.11 2026-04-11 01:52:47.428884 | orchestrator | Git commit: 9f9e405 2026-04-11 01:52:47.428892 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-11 01:52:47.428902 | orchestrator | OS/Arch: linux/amd64 2026-04-11 01:52:47.429357 | orchestrator | Context: default 2026-04-11 01:52:47.429445 | orchestrator | 2026-04-11 01:52:47.429461 | 
orchestrator | Server: Docker Engine - Community 2026-04-11 01:52:47.429471 | orchestrator | Engine: 2026-04-11 01:52:47.429481 | orchestrator | Version: 27.5.1 2026-04-11 01:52:47.429489 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-11 01:52:47.429525 | orchestrator | Go version: go1.22.11 2026-04-11 01:52:47.429534 | orchestrator | Git commit: 4c9b3b0 2026-04-11 01:52:47.429542 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-11 01:52:47.429551 | orchestrator | OS/Arch: linux/amd64 2026-04-11 01:52:47.429559 | orchestrator | Experimental: false 2026-04-11 01:52:47.429567 | orchestrator | containerd: 2026-04-11 01:52:47.429575 | orchestrator | Version: v2.2.2 2026-04-11 01:52:47.429583 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-11 01:52:47.429591 | orchestrator | runc: 2026-04-11 01:52:47.429599 | orchestrator | Version: 1.3.4 2026-04-11 01:52:47.429607 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-11 01:52:47.429616 | orchestrator | docker-init: 2026-04-11 01:52:47.429624 | orchestrator | Version: 0.19.0 2026-04-11 01:52:47.429632 | orchestrator | GitCommit: de40ad0 2026-04-11 01:52:47.432260 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-11 01:52:47.441806 | orchestrator | + set -e 2026-04-11 01:52:47.441905 | orchestrator | + source /opt/manager-vars.sh 2026-04-11 01:52:47.441924 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-11 01:52:47.441934 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-11 01:52:47.441942 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-11 01:52:47.441951 | orchestrator | ++ CEPH_VERSION=reef 2026-04-11 01:52:47.441960 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-11 01:52:47.441970 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-11 01:52:47.441978 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-11 01:52:47.441987 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-11 01:52:47.441996 | orchestrator 
| ++ export OPENSTACK_VERSION=2024.2 2026-04-11 01:52:47.442004 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-11 01:52:47.442066 | orchestrator | ++ export ARA=false 2026-04-11 01:52:47.442084 | orchestrator | ++ ARA=false 2026-04-11 01:52:47.442099 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-11 01:52:47.442114 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-11 01:52:47.442130 | orchestrator | ++ export TEMPEST=false 2026-04-11 01:52:47.442173 | orchestrator | ++ TEMPEST=false 2026-04-11 01:52:47.442184 | orchestrator | ++ export IS_ZUUL=true 2026-04-11 01:52:47.442193 | orchestrator | ++ IS_ZUUL=true 2026-04-11 01:52:47.442202 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48 2026-04-11 01:52:47.442212 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48 2026-04-11 01:52:47.442221 | orchestrator | ++ export EXTERNAL_API=false 2026-04-11 01:52:47.442230 | orchestrator | ++ EXTERNAL_API=false 2026-04-11 01:52:47.442238 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-11 01:52:47.442247 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-11 01:52:47.442256 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-11 01:52:47.442265 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-11 01:52:47.442273 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-11 01:52:47.442282 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-11 01:52:47.442291 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-11 01:52:47.442300 | orchestrator | ++ export INTERACTIVE=false 2026-04-11 01:52:47.442308 | orchestrator | ++ INTERACTIVE=false 2026-04-11 01:52:47.442317 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-11 01:52:47.442330 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-11 01:52:47.442350 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-11 01:52:47.442360 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-04-11 01:52:47.452103 | orchestrator | + set -e 2026-04-11 
01:52:47.452249 | orchestrator | + VERSION=9.5.0
2026-04-11 01:52:47.452271 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-04-11 01:52:47.461937 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-04-11 01:52:47.462075 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-11 01:52:47.467020 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-11 01:52:47.471065 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-11 01:52:47.480085 | orchestrator | /opt/configuration ~
2026-04-11 01:52:47.480170 | orchestrator | + set -e
2026-04-11 01:52:47.480180 | orchestrator | + pushd /opt/configuration
2026-04-11 01:52:47.480186 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-11 01:52:47.483621 | orchestrator | + source /opt/venv/bin/activate
2026-04-11 01:52:47.484711 | orchestrator | ++ deactivate nondestructive
2026-04-11 01:52:47.484762 | orchestrator | ++ '[' -n '' ']'
2026-04-11 01:52:47.484780 | orchestrator | ++ '[' -n '' ']'
2026-04-11 01:52:47.484823 | orchestrator | ++ hash -r
2026-04-11 01:52:47.484838 | orchestrator | ++ '[' -n '' ']'
2026-04-11 01:52:47.484851 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-11 01:52:47.484865 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-11 01:52:47.484878 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-11 01:52:47.484902 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-11 01:52:47.484915 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-11 01:52:47.484929 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-11 01:52:47.484941 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-11 01:52:47.484955 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-11 01:52:47.484969 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-11 01:52:47.484982 | orchestrator | ++ export PATH
2026-04-11 01:52:47.484997 | orchestrator | ++ '[' -n '' ']'
2026-04-11 01:52:47.485016 | orchestrator | ++ '[' -z '' ']'
2026-04-11 01:52:47.485031 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-11 01:52:47.485045 | orchestrator | ++ PS1='(venv) '
2026-04-11 01:52:47.485059 | orchestrator | ++ export PS1
2026-04-11 01:52:47.485073 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-11 01:52:47.485087 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-11 01:52:47.485101 | orchestrator | ++ hash -r
2026-04-11 01:52:47.485286 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-11 01:52:48.958773 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-11 01:52:48.959940 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-11 01:52:48.961690 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-11 01:52:48.963332 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-11 01:52:48.964882 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-11 01:52:48.976845 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-11 01:52:48.979490 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-11 01:52:48.980518 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-11 01:52:48.982185 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-11 01:52:49.026708 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-11 01:52:49.028338 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-11 01:52:49.030099 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-11 01:52:49.031564 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-11 01:52:49.036043 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-11 01:52:49.283640 | orchestrator | ++ which gilt
2026-04-11 01:52:49.288332 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-11 01:52:49.288413 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-11 01:52:49.573033 | orchestrator | osism.cfg-generics:
2026-04-11 01:52:49.717220 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-11 01:52:49.718045 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-11 01:52:49.719249 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-11 01:52:49.719312 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-11 01:52:50.653819 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-11 01:52:50.661729 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-11 01:52:51.059519 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-11 01:52:51.122746 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-11 01:52:51.122854 | orchestrator | + deactivate
2026-04-11 01:52:51.122870 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-11 01:52:51.122884 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-11 01:52:51.122895 | orchestrator | + export PATH
2026-04-11 01:52:51.122907 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-11 01:52:51.122918 | orchestrator | + '[' -n '' ']'
2026-04-11 01:52:51.122932 | orchestrator | + hash -r
2026-04-11 01:52:51.122944 | orchestrator | ~
2026-04-11 01:52:51.122955 | orchestrator | + '[' -n '' ']'
2026-04-11 01:52:51.122966 | orchestrator | + unset VIRTUAL_ENV
2026-04-11 01:52:51.122977 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-11 01:52:51.122988 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-11 01:52:51.122999 | orchestrator | + unset -f deactivate
2026-04-11 01:52:51.123011 | orchestrator | + popd
2026-04-11 01:52:51.124979 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-11 01:52:51.125011 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-04-11 01:52:51.125995 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-11 01:52:51.183195 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-11 01:52:51.183297 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-04-11 01:52:51.183932 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-11 01:52:51.236014 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-11 01:52:51.236488 | orchestrator | ++ semver 2024.2 2025.1
2026-04-11 01:52:51.297634 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-11 01:52:51.297772 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-04-11 01:52:51.402651 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-11 01:52:51.402745 | orchestrator | + source /opt/venv/bin/activate
2026-04-11 01:52:51.402756 | orchestrator | ++ deactivate nondestructive
2026-04-11 01:52:51.402764 | orchestrator | ++ '[' -n '' ']'
2026-04-11 01:52:51.402771 | orchestrator | ++ '[' -n '' ']'
2026-04-11 01:52:51.402778 | orchestrator | ++ hash -r
2026-04-11 01:52:51.402785 | orchestrator | ++ '[' -n '' ']'
2026-04-11 01:52:51.402792 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-11 01:52:51.402798 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-11 01:52:51.402805 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-11 01:52:51.402847 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-11 01:52:51.402855 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-11 01:52:51.402862 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-11 01:52:51.402868 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-11 01:52:51.402876 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-11 01:52:51.402904 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-11 01:52:51.402912 | orchestrator | ++ export PATH
2026-04-11 01:52:51.403301 | orchestrator | ++ '[' -n '' ']'
2026-04-11 01:52:51.403459 | orchestrator | ++ '[' -z '' ']'
2026-04-11 01:52:51.403472 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-11 01:52:51.403480 | orchestrator | ++ PS1='(venv) '
2026-04-11 01:52:51.403564 | orchestrator | ++ export PS1
2026-04-11 01:52:51.403572 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-11 01:52:51.403579 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-11 01:52:51.403586 | orchestrator | ++ hash -r
2026-04-11 01:52:51.403871 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-04-11 01:52:52.889693 | orchestrator |
2026-04-11 01:52:52.889824 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-11 01:52:52.889841 | orchestrator |
2026-04-11 01:52:52.889854 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-11 01:52:53.516537 | orchestrator | ok: [testbed-manager]
2026-04-11 01:52:53.516622 | orchestrator |
2026-04-11 01:52:53.516631 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-11 01:52:54.628494 | orchestrator | changed: [testbed-manager]
2026-04-11 01:52:54.628595 | orchestrator |
2026-04-11 01:52:54.628612 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-11 01:52:54.628706 | orchestrator |
2026-04-11 01:52:54.628732 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-11 01:52:57.208340 | orchestrator | ok: [testbed-manager]
2026-04-11 01:52:57.208456 | orchestrator |
2026-04-11 01:52:57.208480 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-11 01:52:57.272597 | orchestrator | ok: [testbed-manager]
2026-04-11 01:52:57.272700 | orchestrator |
2026-04-11 01:52:57.272717 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-11 01:52:57.772457 | orchestrator | changed: [testbed-manager]
2026-04-11 01:52:57.772581 | orchestrator |
2026-04-11 01:52:57.772609 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-11 01:52:57.826583 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:52:57.826677 | orchestrator |
2026-04-11 01:52:57.826693 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-11 01:52:58.212559 | orchestrator | changed: [testbed-manager]
2026-04-11 01:52:58.212656 | orchestrator |
2026-04-11 01:52:58.212672 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-04-11 01:52:58.582402 | orchestrator | ok: [testbed-manager]
2026-04-11 01:52:58.582502 | orchestrator |
2026-04-11 01:52:58.582518 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-04-11 01:52:58.705581 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:52:58.705668 | orchestrator |
2026-04-11 01:52:58.705682 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-04-11 01:52:58.705692 | orchestrator |
2026-04-11 01:52:58.705702 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-11 01:53:00.648469 | orchestrator | ok: [testbed-manager]
2026-04-11 01:53:00.648613 | orchestrator |
2026-04-11 01:53:00.648629 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-04-11 01:53:00.780271 | orchestrator | included: osism.services.traefik for testbed-manager
2026-04-11 01:53:00.780371 | orchestrator |
2026-04-11 01:53:00.780386 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-04-11 01:53:00.834695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-04-11 01:53:00.834792 | orchestrator |
2026-04-11 01:53:00.834807 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-04-11 01:53:02.025067 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-04-11 01:53:02.025157 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-04-11 01:53:02.025194 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-04-11 01:53:02.025205 | orchestrator |
2026-04-11 01:53:02.025217 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-04-11 01:53:04.053753 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-04-11 01:53:04.053871 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-04-11 01:53:04.053889 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-04-11 01:53:04.053906 | orchestrator |
2026-04-11 01:53:04.053923 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-04-11 01:53:04.782398 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-11 01:53:04.782526 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:04.782553 | orchestrator |
2026-04-11 01:53:04.782573 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-04-11 01:53:05.485959 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-11 01:53:05.486120 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:05.486139 | orchestrator |
2026-04-11 01:53:05.486152 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-04-11 01:53:05.552509 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:53:05.552629 | orchestrator |
2026-04-11 01:53:05.552656 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-04-11 01:53:05.932655 | orchestrator | ok: [testbed-manager]
2026-04-11 01:53:05.932752 | orchestrator |
2026-04-11 01:53:05.932768 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-04-11 01:53:06.015775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-04-11 01:53:06.015864 | orchestrator |
2026-04-11 01:53:06.015876 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-04-11 01:53:07.230320 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:07.230402 | orchestrator |
2026-04-11 01:53:07.230409 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-04-11 01:53:08.193934 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:08.194073 | orchestrator |
2026-04-11 01:53:08.194090 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-04-11 01:53:23.458500 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:23.458596 | orchestrator |
2026-04-11 01:53:23.458607 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-04-11 01:53:23.534541 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:53:23.534635 | orchestrator |
2026-04-11 01:53:23.534671 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-04-11 01:53:23.534683 | orchestrator |
2026-04-11 01:53:23.534694 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-11 01:53:25.545121 | orchestrator | ok: [testbed-manager]
2026-04-11 01:53:25.545207 | orchestrator |
2026-04-11 01:53:25.545292 | orchestrator | TASK [Apply manager role] ******************************************************
2026-04-11 01:53:25.693372 | orchestrator | included: osism.services.manager for testbed-manager
2026-04-11 01:53:25.693461 | orchestrator |
2026-04-11 01:53:25.693475 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-11 01:53:25.746547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-11 01:53:25.746644 | orchestrator |
2026-04-11 01:53:25.746660 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-11 01:53:28.620625 | orchestrator | ok: [testbed-manager]
2026-04-11 01:53:28.620728 | orchestrator |
2026-04-11 01:53:28.620753 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-11 01:53:28.681949 | orchestrator | ok: [testbed-manager]
2026-04-11 01:53:28.682146 | orchestrator |
2026-04-11 01:53:28.682177 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-11 01:53:28.849551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-11 01:53:28.849654 | orchestrator |
2026-04-11 01:53:28.849677 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-11 01:53:31.981890 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-04-11 01:53:31.982086 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-04-11 01:53:31.982116 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-11 01:53:31.982136 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-04-11 01:53:31.982156 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-11 01:53:31.982175 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-11 01:53:31.982194 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-11 01:53:31.982206 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-04-11 01:53:31.982217 | orchestrator |
2026-04-11 01:53:31.982229 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-11 01:53:32.712594 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:32.712673 | orchestrator |
2026-04-11 01:53:32.712684 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-11 01:53:33.409710 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:33.409798 | orchestrator |
2026-04-11 01:53:33.409811 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-11 01:53:33.502385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-11 01:53:33.502511 | orchestrator |
2026-04-11 01:53:33.502539 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-11 01:53:34.867807 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-04-11 01:53:34.867913 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-04-11 01:53:34.867928 | orchestrator |
2026-04-11 01:53:34.867942 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-11 01:53:35.592783 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:35.592885 | orchestrator |
2026-04-11 01:53:35.592903 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-11 01:53:35.653398 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:53:35.653498 | orchestrator |
2026-04-11 01:53:35.653514 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-11 01:53:35.750626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-11 01:53:35.750730 | orchestrator |
2026-04-11 01:53:35.750762 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-11 01:53:36.447052 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:36.447164 | orchestrator |
2026-04-11 01:53:36.447180 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-11 01:53:36.521581 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-11 01:53:36.521686 | orchestrator |
2026-04-11 01:53:36.521703 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-11 01:53:38.053456 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-11 01:53:38.053551 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-11 01:53:38.053569 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:38.053584 | orchestrator |
2026-04-11 01:53:38.053596 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-11 01:53:38.761052 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:38.761154 | orchestrator |
2026-04-11 01:53:38.761171 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-11 01:53:38.817651 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:53:38.817723 | orchestrator |
2026-04-11 01:53:38.817731 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-11 01:53:38.922237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-11 01:53:38.922366 | orchestrator |
2026-04-11 01:53:38.922382 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-11 01:53:39.532582 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:39.532683 | orchestrator |
2026-04-11 01:53:39.532699 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-11 01:53:39.974371 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:39.975431 | orchestrator |
2026-04-11 01:53:39.975490 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-11 01:53:41.375442 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-04-11 01:53:41.375528 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-04-11 01:53:41.375539 | orchestrator |
2026-04-11 01:53:41.375548 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-11 01:53:42.130847 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:42.130921 | orchestrator |
2026-04-11 01:53:42.130929 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-11 01:53:42.557487 | orchestrator | ok: [testbed-manager]
2026-04-11 01:53:42.557600 | orchestrator |
2026-04-11 01:53:42.557616 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-11 01:53:43.060442 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:43.060574 | orchestrator |
2026-04-11 01:53:43.060592 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-11 01:53:43.115852 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:53:43.115931 | orchestrator |
2026-04-11 01:53:43.115939 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-11 01:53:43.201878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-11 01:53:43.202109 | orchestrator |
2026-04-11 01:53:43.202142 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-11 01:53:43.250710 | orchestrator | ok: [testbed-manager]
2026-04-11 01:53:43.250827 | orchestrator |
2026-04-11 01:53:43.250848 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-11 01:53:45.428803 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-04-11 01:53:45.429769 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-04-11 01:53:45.429811 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-04-11 01:53:45.429824 | orchestrator |
2026-04-11 01:53:45.429837 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-11 01:53:46.190860 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:46.190990 | orchestrator |
2026-04-11 01:53:46.191017 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-11 01:53:46.991687 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:46.991789 | orchestrator |
2026-04-11 01:53:46.991804 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-11 01:53:47.887995 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:47.888064 | orchestrator |
2026-04-11 01:53:47.888071 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-11 01:53:47.967508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-11 01:53:47.967612 | orchestrator |
2026-04-11 01:53:47.967633 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-11 01:53:48.018828 | orchestrator | ok: [testbed-manager]
2026-04-11 01:53:48.018967 | orchestrator |
2026-04-11 01:53:48.018990 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-11 01:53:48.812426 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-04-11 01:53:48.812513 | orchestrator |
2026-04-11 01:53:48.812523 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-11 01:53:48.908331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-11 01:53:48.908415 | orchestrator |
2026-04-11 01:53:48.908427 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-11 01:53:49.679097 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:49.679205 | orchestrator |
2026-04-11 01:53:49.679230 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-11 01:53:50.351259 | orchestrator | ok: [testbed-manager]
2026-04-11 01:53:50.351381 | orchestrator |
2026-04-11 01:53:50.351391 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-11 01:53:50.415780 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:53:50.415907 | orchestrator |
2026-04-11 01:53:50.415928 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-11 01:53:50.483937 | orchestrator | ok: [testbed-manager]
2026-04-11 01:53:50.484040 | orchestrator |
2026-04-11 01:53:50.484057 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-11 01:53:51.386630 | orchestrator | changed: [testbed-manager]
2026-04-11 01:53:51.386712 | orchestrator |
2026-04-11 01:53:51.386724 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-11 01:55:09.139832 | orchestrator | changed: [testbed-manager]
2026-04-11 01:55:09.139935 | orchestrator |
2026-04-11 01:55:09.139944 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-11 01:55:10.234533 | orchestrator | ok: [testbed-manager]
2026-04-11 01:55:10.234637 | orchestrator |
2026-04-11 01:55:10.234663 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-11 01:55:10.301402 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:55:10.301550 | orchestrator |
2026-04-11 01:55:10.301573 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-11 01:55:13.068660 | orchestrator | changed: [testbed-manager]
2026-04-11 01:55:13.068764 | orchestrator |
2026-04-11 01:55:13.068781 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-11 01:55:13.121156 | orchestrator | ok: [testbed-manager]
2026-04-11 01:55:13.121257 | orchestrator |
2026-04-11 01:55:13.121273 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-11 01:55:13.121285 | orchestrator |
2026-04-11 01:55:13.121296 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-11 01:55:13.288083 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:55:13.288214 | orchestrator |
2026-04-11 01:55:13.288248 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-11 01:56:13.343876 | orchestrator | Pausing for 60 seconds
2026-04-11 01:56:13.343974 | orchestrator | changed: [testbed-manager]
2026-04-11 01:56:13.343985 | orchestrator |
2026-04-11 01:56:13.343993 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-11 01:56:16.759796 | orchestrator | changed: [testbed-manager]
2026-04-11 01:56:16.759913 | orchestrator |
2026-04-11 01:56:16.759931 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-11 01:57:19.133501 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-11 01:57:19.133613 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-04-11 01:57:19.133740 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-04-11 01:57:19.133750 | orchestrator | changed: [testbed-manager]
2026-04-11 01:57:19.133758 | orchestrator |
2026-04-11 01:57:19.133765 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-11 01:57:31.348504 | orchestrator | changed: [testbed-manager]
2026-04-11 01:57:31.348617 | orchestrator |
2026-04-11 01:57:31.348638 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-11 01:57:31.437538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-11 01:57:31.437636 | orchestrator |
2026-04-11 01:57:31.437699 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-11 01:57:31.437713 | orchestrator |
2026-04-11 01:57:31.437724 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-11 01:57:31.496173 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:57:31.496298 | orchestrator |
2026-04-11 01:57:31.496329 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-11 01:57:31.589288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-11 01:57:31.589393 | orchestrator |
2026-04-11 01:57:31.589434 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-11 01:57:32.449490 | orchestrator | changed: [testbed-manager]
2026-04-11 01:57:32.449576 | orchestrator |
2026-04-11 01:57:32.449586 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-11 01:57:35.860450 | orchestrator | ok: [testbed-manager]
2026-04-11 01:57:35.860552 | orchestrator |
2026-04-11 01:57:35.860571 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-11 01:57:35.929957 | orchestrator | ok: [testbed-manager] => {
2026-04-11 01:57:35.930103 | orchestrator |     "version_check_result.stdout_lines": [
2026-04-11 01:57:35.930119 | orchestrator |         "=== OSISM Container Version Check ===",
2026-04-11 01:57:35.930131 | orchestrator |         "Checking running containers against expected versions...",
2026-04-11 01:57:35.930143 | orchestrator |         "",
2026-04-11 01:57:35.930155 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-11 01:57:35.930165 | orchestrator |         "  Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-04-11 01:57:35.930177 | orchestrator |         "  Enabled:  true",
2026-04-11 01:57:35.930188 | orchestrator |         "  Running:  registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-04-11 01:57:35.930199 | orchestrator |         "  Status:   ✅ MATCH",
2026-04-11 01:57:35.930210 | orchestrator |         "",
2026-04-11 01:57:35.930220 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-11 01:57:35.930257 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-04-11 01:57:35.930270 | orchestrator |         "  Enabled:  true",
2026-04-11 01:57:35.930282 | orchestrator |         "  Running:  registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-04-11 01:57:35.930292 | orchestrator |         "  Status:   ✅ MATCH",
2026-04-11 01:57:35.930303 | orchestrator |         "",
2026-04-11 01:57:35.930314 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-11 01:57:35.930324 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-04-11 01:57:35.930335 | orchestrator |         "  Enabled:  true",
2026-04-11 01:57:35.930345 | orchestrator |         "  Running:  registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-04-11 01:57:35.930356 | orchestrator |         "  Status:   ✅ MATCH",
2026-04-11 01:57:35.930367 | orchestrator |         "",
2026-04-11 01:57:35.930377 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-11 01:57:35.930388 | orchestrator |         "  Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-04-11 01:57:35.930398 | orchestrator |         "  Enabled:  true",
2026-04-11 01:57:35.930409 | orchestrator |         "  Running:  registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-04-11 01:57:35.930420 | orchestrator |         "  Status:   ✅ MATCH",
2026-04-11 01:57:35.930430 | orchestrator |         "",
2026-04-11 01:57:35.930443 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-11 01:57:35.930453 | orchestrator |         "  Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-04-11 01:57:35.930464 | orchestrator |         "  Enabled:  true",
2026-04-11 01:57:35.930475 | orchestrator |         "  Running:  registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-04-11 01:57:35.930485 | orchestrator |         "  Status:   ✅ MATCH",
2026-04-11 01:57:35.930496 | orchestrator |         "",
2026-04-11 01:57:35.930508 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-04-11 01:57:35.930518 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-11 01:57:35.930529 | orchestrator |         "  Enabled:  true",
2026-04-11 01:57:35.930540 | orchestrator |         "  Running:  registry.osism.tech/osism/osism:0.20251130.1",
2026-04-11 01:57:35.930551 | orchestrator |         "  Status:   ✅ MATCH",
2026-04-11 01:57:35.930561 | orchestrator |         "",
2026-04-11 01:57:35.930572 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-04-11 01:57:35.930583 | orchestrator |         "  Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-11 01:57:35.930593 | orchestrator |         "  Enabled:  true",
2026-04-11 01:57:35.930604 | orchestrator |         "  Running:  registry.osism.tech/osism/ara-server:1.7.3",
2026-04-11 01:57:35.930615 | orchestrator |         "  Status:   ✅ MATCH",
2026-04-11 01:57:35.930625 | orchestrator |         "",
2026-04-11 01:57:35.930636 | orchestrator |         "Checking service: mariadb (MariaDB for ARA)",
2026-04-11 01:57:35.930646 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-11 01:57:35.930679 | orchestrator |         "  Enabled:  true",
2026-04-11 01:57:35.930690 | orchestrator |         "  Running:  registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-11 01:57:35.930700 | orchestrator |         "  Status:   ✅ MATCH",
2026-04-11 01:57:35.930710 | orchestrator |         "",
2026-04-11 01:57:35.930720 | orchestrator |         "Checking service: frontend (OSISM Frontend)",
2026-04-11 01:57:35.930731 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-04-11 01:57:35.930742 | orchestrator |         "  Enabled:  true",
2026-04-11 01:57:35.930753 | orchestrator |         "  Running:  registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-04-11 01:57:35.930764 | orchestrator |         "  Status:   ✅ MATCH",
2026-04-11 01:57:35.930775 | orchestrator |         "",
2026-04-11 01:57:35.930786 | orchestrator |         "Checking service: redis (Redis Cache)",
2026-04-11 01:57:35.930797 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-11 01:57:35.930809 | orchestrator |         "  Enabled:  true",
2026-04-11 01:57:35.930819 | orchestrator |         "  Running:  registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-11 01:57:35.930826 | orchestrator |         "  Status:   ✅ MATCH",
2026-04-11 01:57:35.930833 | orchestrator |         "",
2026-04-11 01:57:35.930839 | orchestrator |         "Checking service: api (OSISM API Service)",
2026-04-11 01:57:35.930846 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-11 01:57:35.930859 | orchestrator |         "  Enabled:  true",
2026-04-11 01:57:35.930866 | orchestrator |         "  Running:  registry.osism.tech/osism/osism:0.20251130.1",
2026-04-11 01:57:35.930873 | orchestrator |         "  Status:   ✅ MATCH",
2026-04-11 01:57:35.930879 | orchestrator |         "",
2026-04-11 01:57:35.930886 | orchestrator |         "Checking service: listener (OpenStack Event Listener)",
2026-04-11 01:57:35.930893 |
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-11 01:57:35.930899 | orchestrator | " Enabled: true", 2026-04-11 01:57:35.930906 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-11 01:57:35.930913 | orchestrator | " Status: ✅ MATCH", 2026-04-11 01:57:35.930920 | orchestrator | "", 2026-04-11 01:57:35.930927 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-11 01:57:35.930934 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-11 01:57:35.930940 | orchestrator | " Enabled: true", 2026-04-11 01:57:35.930947 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-11 01:57:35.930954 | orchestrator | " Status: ✅ MATCH", 2026-04-11 01:57:35.930961 | orchestrator | "", 2026-04-11 01:57:35.930967 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-11 01:57:35.930974 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-11 01:57:35.930980 | orchestrator | " Enabled: true", 2026-04-11 01:57:35.930987 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-11 01:57:35.931010 | orchestrator | " Status: ✅ MATCH", 2026-04-11 01:57:35.931017 | orchestrator | "", 2026-04-11 01:57:35.931024 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-11 01:57:35.931031 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-11 01:57:35.931045 | orchestrator | " Enabled: true", 2026-04-11 01:57:35.931053 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-11 01:57:35.931059 | orchestrator | " Status: ✅ MATCH", 2026-04-11 01:57:35.931066 | orchestrator | "", 2026-04-11 01:57:35.931073 | orchestrator | "=== Summary ===", 2026-04-11 01:57:35.931080 | orchestrator | "Errors (version mismatches): 0", 2026-04-11 01:57:35.931086 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-04-11 01:57:35.931093 | orchestrator | "", 2026-04-11 01:57:35.931100 | orchestrator | "✅ All running containers match expected versions!" 2026-04-11 01:57:35.931107 | orchestrator | ] 2026-04-11 01:57:35.931113 | orchestrator | } 2026-04-11 01:57:35.931120 | orchestrator | 2026-04-11 01:57:35.931127 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-11 01:57:35.997232 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:57:35.997349 | orchestrator | 2026-04-11 01:57:35.997368 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 01:57:35.997382 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-11 01:57:35.997393 | orchestrator | 2026-04-11 01:57:36.136990 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-11 01:57:36.137061 | orchestrator | + deactivate 2026-04-11 01:57:36.137069 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-11 01:57:36.137076 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-11 01:57:36.137081 | orchestrator | + export PATH 2026-04-11 01:57:36.137086 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-11 01:57:36.137091 | orchestrator | + '[' -n '' ']' 2026-04-11 01:57:36.137096 | orchestrator | + hash -r 2026-04-11 01:57:36.137101 | orchestrator | + '[' -n '' ']' 2026-04-11 01:57:36.137106 | orchestrator | + unset VIRTUAL_ENV 2026-04-11 01:57:36.137111 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-11 01:57:36.137115 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-11 01:57:36.137120 | orchestrator | + unset -f deactivate 2026-04-11 01:57:36.137324 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-11 01:57:36.147054 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-11 01:57:36.147128 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-11 01:57:36.147162 | orchestrator | + local max_attempts=60 2026-04-11 01:57:36.147169 | orchestrator | + local name=ceph-ansible 2026-04-11 01:57:36.147174 | orchestrator | + local attempt_num=1 2026-04-11 01:57:36.148250 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-11 01:57:36.184302 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-11 01:57:36.184375 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-11 01:57:36.184383 | orchestrator | + local max_attempts=60 2026-04-11 01:57:36.184390 | orchestrator | + local name=kolla-ansible 2026-04-11 01:57:36.184448 | orchestrator | + local attempt_num=1 2026-04-11 01:57:36.186114 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-11 01:57:36.232524 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-11 01:57:36.232605 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-11 01:57:36.232620 | orchestrator | + local max_attempts=60 2026-04-11 01:57:36.232632 | orchestrator | + local name=osism-ansible 2026-04-11 01:57:36.232888 | orchestrator | + local attempt_num=1 2026-04-11 01:57:36.233853 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-11 01:57:36.276953 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-11 01:57:36.277026 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-11 01:57:36.277034 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-11 01:57:37.031048 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-11 01:57:37.229228 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-11 01:57:37.229320 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-04-11 01:57:37.229331 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-04-11 01:57:37.229339 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-11 01:57:37.229348 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-04-11 01:57:37.229373 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-04-11 01:57:37.229381 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-04-11 01:57:37.229388 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-04-11 01:57:37.229394 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-04-11 01:57:37.229401 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-04-11 01:57:37.229407 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-04-11 01:57:37.229414 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2026-04-11 01:57:37.229421 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2026-04-11 01:57:37.229445 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2026-04-11 01:57:37.229451 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2026-04-11 01:57:37.229458 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2026-04-11 01:57:37.236864 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-11 01:57:37.296381 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-11 01:57:37.296524 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-04-11 01:57:37.301041 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-04-11 01:57:49.769133 | orchestrator | 2026-04-11 01:57:49 | INFO  | Task 92498594-89f7-40f0-81d6-dda356a42013 (resolvconf) was prepared for execution.
2026-04-11 01:57:49.769224 | orchestrator | 2026-04-11 01:57:49 | INFO  | It takes a moment until task 92498594-89f7-40f0-81d6-dda356a42013 (resolvconf) has been started and output is visible here.
2026-04-11 01:58:05.399438 | orchestrator |
2026-04-11 01:58:05.399533 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-04-11 01:58:05.399543 | orchestrator |
2026-04-11 01:58:05.399550 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-11 01:58:05.399558 | orchestrator | Saturday 11 April 2026 01:57:54 +0000 (0:00:00.189) 0:00:00.189 ********
2026-04-11 01:58:05.399565 | orchestrator | ok: [testbed-manager]
2026-04-11 01:58:05.399572 | orchestrator |
2026-04-11 01:58:05.399579 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-11 01:58:05.399586 | orchestrator | Saturday 11 April 2026 01:57:58 +0000 (0:00:04.133) 0:00:04.323 ********
2026-04-11 01:58:05.399593 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:58:05.399600 | orchestrator |
2026-04-11 01:58:05.399607 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-11 01:58:05.399613 | orchestrator | Saturday 11 April 2026 01:57:58 +0000 (0:00:00.071) 0:00:04.395 ********
2026-04-11 01:58:05.399620 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-04-11 01:58:05.399627 | orchestrator |
2026-04-11 01:58:05.399634 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-11 01:58:05.399640 | orchestrator | Saturday 11 April 2026 01:57:58 +0000 (0:00:00.106) 0:00:04.502 ********
2026-04-11 01:58:05.399662 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-04-11 01:58:05.399668 | orchestrator |
2026-04-11 01:58:05.399675 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-11 01:58:05.399681 | orchestrator | Saturday 11 April 2026 01:57:58 +0000 (0:00:00.092) 0:00:04.594 ********
2026-04-11 01:58:05.399688 | orchestrator | ok: [testbed-manager]
2026-04-11 01:58:05.399732 | orchestrator |
2026-04-11 01:58:05.399739 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-11 01:58:05.399745 | orchestrator | Saturday 11 April 2026 01:58:00 +0000 (0:00:01.314) 0:00:05.909 ********
2026-04-11 01:58:05.399751 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:58:05.399758 | orchestrator |
2026-04-11 01:58:05.399764 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-11 01:58:05.399770 | orchestrator | Saturday 11 April 2026 01:58:00 +0000 (0:00:00.068) 0:00:05.977 ********
2026-04-11 01:58:05.399795 | orchestrator | ok: [testbed-manager]
2026-04-11 01:58:05.399801 | orchestrator |
2026-04-11 01:58:05.399808 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-11 01:58:05.399814 | orchestrator | Saturday 11 April 2026 01:58:00 +0000 (0:00:00.599) 0:00:06.577 ********
2026-04-11 01:58:05.399820 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:58:05.399827 | orchestrator |
2026-04-11 01:58:05.399833 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-11 01:58:05.399840 | orchestrator | Saturday 11 April 2026 01:58:00 +0000 (0:00:00.085) 0:00:06.663 ********
2026-04-11 01:58:05.399846 | orchestrator | changed: [testbed-manager]
2026-04-11 01:58:05.399853 | orchestrator |
2026-04-11 01:58:05.399859 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-11 01:58:05.399865 | orchestrator | Saturday 11 April 2026 01:58:01 +0000 (0:00:00.587) 0:00:07.251 ********
2026-04-11 01:58:05.399871 | orchestrator | changed: [testbed-manager]
2026-04-11 01:58:05.399878 | orchestrator |
2026-04-11 01:58:05.399884 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-11 01:58:05.399890 | orchestrator | Saturday 11 April 2026 01:58:02 +0000 (0:00:01.217) 0:00:08.468 ********
2026-04-11 01:58:05.399897 | orchestrator | ok: [testbed-manager]
2026-04-11 01:58:05.399903 | orchestrator |
2026-04-11 01:58:05.399909 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-11 01:58:05.399915 | orchestrator | Saturday 11 April 2026 01:58:03 +0000 (0:00:01.088) 0:00:09.556 ********
2026-04-11 01:58:05.399922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-04-11 01:58:05.399928 | orchestrator |
2026-04-11 01:58:05.399934 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-11 01:58:05.399940 | orchestrator | Saturday 11 April 2026 01:58:03 +0000 (0:00:00.096) 0:00:09.653 ********
2026-04-11 01:58:05.399947 | orchestrator | changed: [testbed-manager]
2026-04-11 01:58:05.399953 | orchestrator |
2026-04-11 01:58:05.399959 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 01:58:05.399966 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-11 01:58:05.399973 | orchestrator |
2026-04-11 01:58:05.399979 | orchestrator |
2026-04-11 01:58:05.399985 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 01:58:05.399993 | orchestrator | Saturday 11 April 2026 01:58:05 +0000 (0:00:01.283) 0:00:10.937 ********
2026-04-11 01:58:05.400001 | orchestrator | ===============================================================================
2026-04-11 01:58:05.400008 | orchestrator | Gathering Facts --------------------------------------------------------- 4.13s
2026-04-11 01:58:05.400016 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.31s
2026-04-11 01:58:05.400023 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.28s
2026-04-11 01:58:05.400030 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.22s
2026-04-11 01:58:05.400038 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.09s
2026-04-11 01:58:05.400045 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.60s
2026-04-11 01:58:05.400065 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.59s
2026-04-11 01:58:05.400072 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.11s
2026-04-11 01:58:05.400080 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s
2026-04-11 01:58:05.400087 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2026-04-11 01:58:05.400094 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2026-04-11 01:58:05.400101 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2026-04-11 01:58:05.400114 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2026-04-11 01:58:05.765274 | orchestrator | + osism apply sshconfig
2026-04-11 01:58:18.081561 | orchestrator | 2026-04-11 01:58:18 | INFO  | Task 0b763af9-965b-4e02-9017-8c6ca715c9a3 (sshconfig) was prepared for execution.
2026-04-11 01:58:18.081674 | orchestrator | 2026-04-11 01:58:18 | INFO  | It takes a moment until task 0b763af9-965b-4e02-9017-8c6ca715c9a3 (sshconfig) has been started and output is visible here.
2026-04-11 01:58:31.156174 | orchestrator |
2026-04-11 01:58:31.156301 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-04-11 01:58:31.156324 | orchestrator |
2026-04-11 01:58:31.156364 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-04-11 01:58:31.156381 | orchestrator | Saturday 11 April 2026 01:58:22 +0000 (0:00:00.170) 0:00:00.170 ********
2026-04-11 01:58:31.156398 | orchestrator | ok: [testbed-manager]
2026-04-11 01:58:31.156417 | orchestrator |
2026-04-11 01:58:31.156434 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-04-11 01:58:31.156451 | orchestrator | Saturday 11 April 2026 01:58:23 +0000 (0:00:00.594) 0:00:00.764 ********
2026-04-11 01:58:31.156466 | orchestrator | changed: [testbed-manager]
2026-04-11 01:58:31.156483 | orchestrator |
2026-04-11 01:58:31.156499 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-04-11 01:58:31.156516 | orchestrator | Saturday 11 April 2026 01:58:23 +0000 (0:00:00.579) 0:00:01.344 ********
2026-04-11 01:58:31.156533 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-04-11 01:58:31.156550 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-04-11 01:58:31.156566 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-04-11 01:58:31.156582 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-04-11 01:58:31.156598 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-04-11 01:58:31.156616 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-04-11 01:58:31.156632 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-04-11 01:58:31.156646 | orchestrator |
2026-04-11 01:58:31.156664 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-04-11 01:58:31.156681 | orchestrator | Saturday 11 April 2026 01:58:30 +0000 (0:00:06.313) 0:00:07.658 ********
2026-04-11 01:58:31.156698 | orchestrator | skipping: [testbed-manager]
2026-04-11 01:58:31.156716 | orchestrator |
2026-04-11 01:58:31.156763 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-04-11 01:58:31.156781 | orchestrator | Saturday 11 April 2026 01:58:30 +0000 (0:00:00.108) 0:00:07.766 ********
2026-04-11 01:58:31.156798 | orchestrator | changed: [testbed-manager]
2026-04-11 01:58:31.156815 | orchestrator |
2026-04-11 01:58:31.156831 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 01:58:31.156848 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 01:58:31.156866 | orchestrator |
2026-04-11 01:58:31.156881 | orchestrator |
2026-04-11 01:58:31.156897 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 01:58:31.156914 | orchestrator | Saturday 11 April 2026 01:58:30 +0000 (0:00:00.633) 0:00:08.399 ********
2026-04-11 01:58:31.156929 | orchestrator | ===============================================================================
2026-04-11 01:58:31.156942 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.31s
2026-04-11 01:58:31.156955 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.63s
2026-04-11 01:58:31.156971 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.59s
2026-04-11 01:58:31.156988 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.58s
2026-04-11 01:58:31.157003 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.11s
2026-04-11 01:58:31.522242 | orchestrator | + osism apply known-hosts
2026-04-11 01:58:43.784888 | orchestrator | 2026-04-11 01:58:43 | INFO  | Task 6af064e7-87b9-40e2-a979-08458e685c03 (known-hosts) was prepared for execution.
2026-04-11 01:58:43.785039 | orchestrator | 2026-04-11 01:58:43 | INFO  | It takes a moment until task 6af064e7-87b9-40e2-a979-08458e685c03 (known-hosts) has been started and output is visible here.
2026-04-11 01:59:02.186338 | orchestrator |
2026-04-11 01:59:02.186436 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-04-11 01:59:02.186448 | orchestrator |
2026-04-11 01:59:02.186458 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-04-11 01:59:02.186467 | orchestrator | Saturday 11 April 2026 01:58:48 +0000 (0:00:00.178) 0:00:00.178 ********
2026-04-11 01:59:02.186476 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-04-11 01:59:02.186485 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-04-11 01:59:02.186493 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-04-11 01:59:02.186501 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-04-11 01:59:02.186509 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-11 01:59:02.186517 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-11 01:59:02.186525 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-11 01:59:02.186532 | orchestrator |
2026-04-11 01:59:02.186541 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-04-11 01:59:02.186550 | orchestrator | Saturday 11 April 2026 01:58:54 +0000 (0:00:06.257) 0:00:06.436 ********
2026-04-11 01:59:02.186559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-04-11 01:59:02.186569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-04-11 01:59:02.186577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-04-11 01:59:02.186585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-04-11 01:59:02.186593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-04-11 01:59:02.186610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-04-11 01:59:02.186618 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-04-11 01:59:02.186626 | orchestrator |
2026-04-11 01:59:02.186634 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-11 01:59:02.186642 | orchestrator | Saturday 11 April 2026 01:58:54 +0000 (0:00:00.184) 0:00:06.621 ********
2026-04-11 01:59:02.186651 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMMm3LPQEIpk9qOtx0gO9wNKKrzpLGZXdDcCPcNyhR39BzfK3amSE6UX53yRilBi3kVpJdUEWuj2g2Nm2F7s0Hc=)
2026-04-11 01:59:02.186667 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChH3Yv5PWZterJmCSjFn2JwPnF3O36lUIETjFRJXo64D/WBoIYKsgxfirHsDMBl8B0JZrPSN9ST0w53j1xdc2zVyRWxGYGbn1cwkC7K4/DL59HbdCTb9gzowvcIyLBk6hcCHVfk0rD6OA13NzPUyH3ijFPBx9Fv2meOjhSMAt2iYMaGDMA9IvD7M6lxjdGbWPKdzVp4n+OelRnuz8LgpcfVCP0hxGsjctE5acVsA1plZL5r/TqRq0n0NSPodwu+tXP59hvcs/v/7bH9rjaOAbCE/HDnuTjoZoGKGQfN54JXU3yc/D9j04bjkAg8nN/ov9oFP9DawNPrSPkkLeIBXw8iKBA9vqiP1ZBUcqMbuB/ftHs7eOBZFSweMSUSqIHORZgXYJFfxFDXMktQBswye+a7CQyrq3+bL1AxfXfoloH0sUlCvDuPsKSEb6lKSbiw/15i2lbJwz51oXPAP/rWemsx+ewL2RgrmUyuua9n8pBAQu0e46jA48yJeBxNkpljT8=)
2026-04-11 01:59:02.186697 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBbSGbGmtyzy1a4bF59ZSm9IB2IV6LdTgfA+ojWiuqIF)
2026-04-11 01:59:02.186707 | orchestrator |
2026-04-11 01:59:02.186716 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-11 01:59:02.186724 | orchestrator | Saturday 11 April 2026 01:58:56 +0000 (0:00:01.266) 0:00:07.887 ********
2026-04-11 01:59:02.186732 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLxOqNOB0Jbz7fnH4GdvuETxi3dSNKj2y1w/Yr1ifBd6iqMKO7x1QWPAJdLYzSDGUlVyDtNmzW/lCdF71mos47E=)
2026-04-11 01:59:02.186740 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHRzG0yEEh3z8g+bjZzY/Ojc42C0O2AI3H5X909j/TW8)
2026-04-11 01:59:02.186769 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD6+ifkcLdWJHZKxzT9fBvkjE3DtGvKgDgwqdJ0xgk3/2EhPfbelDwLXgUlrQKu5oBWgwCCvv6/OAKs3Ak/AyGeGbsR6+1F+pgAxQtpuHV126uYzp6llKiOKgbcxDGjNJm1sHOGHxthU4rU8XvLew3dSQTHf5Q7iZWlioj7ifr8Mu8fWzPRZlNYeahyn5Zr0mwiRnPqsTRR8bnpSaOpf6lb9D1tJUSbhFzU07T/zCkQsWjLdwN/LW23hBQtfyJayPxk90pZYEqY4hvEPcepkc8afYdXl5pbb0IGLYLgIFREIgDXAjknU1HDPXB1xNvzA3DIMQ2Q+Q3Mfbj22x2dVeYrcwfADsmuw5GIynMO1dsTb5Jx7OmcUL5mbHTKF05wxyjwtP4W0FoRk5NSlJ3lRwdxEUtgAiPSS0CMYuMbT7nA4ViX6Sk5pCWcOCMGFFHssiouVHC+rWdnBUvaFn7F4VJ+bWD8yitWmLQGQcCkc2YTO0Mrl9GQYnctbliQ06BCSeM=)
2026-04-11 01:59:02.186807 | orchestrator |
2026-04-11 01:59:02.186815 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-11 01:59:02.186823 | orchestrator | Saturday 11 April 2026 01:58:57 +0000 (0:00:01.186) 0:00:09.073 ********
2026-04-11 01:59:02.186831 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyEiTgg7ACO3SHyWpTKPEESUubhE/Y+cW1LQ9EtbZOfrc/od2gKIkOk+CBblunWXsRb9MZp81DOYGE0uioS+9DR6WvmWPaQYXf9rNi4KPQi4QaWB7oH7KB7MVvx2X57unHU6l2wsRSXfGf9g1J+hNMLYJ3QuMHH0p3bCQdKaO+Z4wHKavZvVXDGfkbGD7anhWUAXMbujdAxURNkr3YBdqGkvW9eQ7V6Hm2Je3hqG3HFvN6YCBWrHAITU7PE0gx92dux9bCVFw7qARcrHW3RIaAVrn5JTe2ZHCKYVyPhwRF8Uy7tAlm7UQqcVXOr5E8Mec3HiqIB8+Z6jputaOx12AqQziSGIeAkSLTbSkox4j/pwXwT7p52oOFRXFEGPFF9RjHjFVLyawtnYqmC/P8Q0NcM7x+lsyaxa58Ag6Gz/vYR1bxK1ZDWcu2KsJKZC53l2uriTUdSr3L4BWcjYVnd/C2wTjMvesuRIDPejL7Vk4t5IUHnA1gQWmd15i3hO6HxNU=)
2026-04-11 01:59:02.186840 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLj0xfsHQHQCqsjbGnDQmiQ95+PNj2wnl0BqM3tF4nzO66OyElwpQi9iAuDwRzg3X25Z/ggbeUCLugSLQlX0V90=)
2026-04-11 01:59:02.186849 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICr2+YAclnYhMFLq4UvA+iC4X4xjOhtvnWFzQkg7E+Nt)
2026-04-11 01:59:02.186857 | orchestrator |
2026-04-11 01:59:02.186865 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-11 01:59:02.186873 | orchestrator | Saturday 11 April 2026 01:58:58 +0000 (0:00:01.159) 0:00:10.232 ********
2026-04-11 01:59:02.186881 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2mfZj/DpCKVEoStss5LOdRrQ1/EUjXe6LayHJ6zjTJgc0pbqy7FMaX0UeSyv+d5yxj38SDbyKLpAAOR5C6EjqwRepBgWpSO5+PbecJnWD/fUJmXKZBWUtZEgHpJWB40/4tpT3MfviJk8mt2+Qr5XTFh8c7NUAdMaGUxuBj10bTScIxWfvh4BwTvNIb+T0hGrQ6EzMm/jQ1t9dXQ/ZoJeGL3+6mHOomKpctuXyX8NIjdsFa681GO/wnVhYMmiet6CKsH8Uh015ntv8mS09C9MK7iWIRtpBjFpKrx5slpdSQLu+zyhgPZ5RruzCRREdsvgJ47UklWMIqZzT0NTTMrP/2GSW85jR1XYtltZMODllDmw3V6sg89IRDj4bxBxqmG7XJLOH0LXm2oYOlOdI/9ZEJLZJmIBN4VvWzGsf7V5U2sNEk2OTAknLQg3KL9PfvtxOcuPCCB4WJy3toa/4pUOm+QPPaVCkVOF7e0xV9VB4T2hDRuk3YzYOzdfW+erA4Gs=)
2026-04-11 01:59:02.186899 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF7WwWbBlCQucdFNsJBEHdpNWeIN1AD9CIFC5VzMpemxwYR5JQA8FsYN3KEgMxkpWi/wgbpQ7hFOiV3VdzsLcPs=)
2026-04-11 01:59:02.186908 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID/x4uMazyl2GTph75IDHau8Lak7LmAE+nz3NdGXM0Rs)
2026-04-11 01:59:02.186918 | orchestrator |
2026-04-11 01:59:02.186927 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-11 01:59:02.186936 | orchestrator | Saturday 11 April 2026 01:58:59 +0000 (0:00:01.160) 0:00:11.393 ********
2026-04-11 01:59:02.187012 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpz831la6lI4h2zRBDY+HscEZ/o5rO7FQnPOXK+/ULzxf8WTKTJx8h2EB7Ttc1YJR1cTxS1x9IHvq9m6cT8W80UmgKMoAG8n9DKMYDP46R+JSbWsviLTwfAySOt8j6iYrALwf7O+r9cx5Mf6Sq9sEB+lBU16aSKP5fVBBod6XTgKcC8ilzvdntMe6nfqojji5Wv+er6PR9EGOMRGZSXUCf2Lif0CjPLVvIys5yJW7ajLK0i3znoNLo8OlPiUzMdTYk4N9lnuLar3nqLXxD4wbW0DbzIvCjqqWeKTVqV42s2DBoWRluUiC/CTGr9uCiZWCAQv/PyPGozQdnDtkp09fK8O0k503IRM4a/bA0YuFb1JI3Z5PZlwrRQrXmQSFP8rw95PQpCRIzH0ozeYTr0MzV2VAzs2JyP/M94iZvM+4sEyP/74cH3rGDLPDjdH0F/37vQ3etaL3qpgkVlzi+NzpKKifDz0acwh44VyFB//7a1p2YvMtUDxuVspXsM/wx6zM=)
2026-04-11 01:59:02.187022 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOAh6nXxuzoiCRpISXSfX3/K0j9212apPicJ6n2daajDkKA2OK957SPMJ/uoae5hrkoBCstqFhiSvU38rci/xBM=)
2026-04-11 01:59:02.187032 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEHwJy5J4HRG5zfmAuyJGwe1/jfW7rVyp2ULh3JTkjpj)
2026-04-11 01:59:02.187042 | orchestrator |
2026-04-11 01:59:02.187051 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-11 01:59:02.187060 | orchestrator | Saturday 11 April 2026 01:59:00 +0000 (0:00:01.221) 0:00:12.615 ********
2026-04-11 01:59:02.187075 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLIIuDTAjRDyyfl3omlj0Z/PW7l+nwF3LIcxkFIQ6wqgfoXYjKfBUXTJC5zYnlaxKi96Wu3U8JH9WZbLH+Yw3Oe6xBoZbYJkjirPm+Gzj3shR4MriSO7b8id2dMsshC0Xdqc26SUuMZs4hkDziF2AWbzUQxOQnDIduIgiODCphyA9KwqMuk0faa3nsJTsokEsC4dwy0NCkNCkEIX42p4Hy74EhtbbKOkBgGjFRkYIKAFPxcKlWtyU+aYahkuk2W8amtkeesuv85RO1PFdDd8xXLbvanYQruN3EoqIqaBTxP6l0AlMf0x1vrrdgjGJ5wTu6Rkg3l92rUkzcyZMlswkdfP0mhPN8TbhFMbznDzPbKfKY+PNC2xQLxkYxELlMTEbHw+cmiq3BRZXjQ9eshvyaleIOSFPe4IQHN5DVrJJbwsQDfeJEAwH231XNvup4p59B+KF9dUt5UyXWs5AjY/KWM94EAeo8tZFuqYKz0qtXdaWdx1sX4rhLZe4A+TgI9ck=)
2026-04-11 01:59:13.802306 | orchestrator | changed: [testbed-manager] =>
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILxc3MKlSv+bQvyQlq/3Hw46hFdOvf/URY46KNSDraG+) 2026-04-11 01:59:13.802485 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIh/9ctp7KAmDdQkDKW0qmwmNSJtFEjYgm2+mvLJxiBXKaTHgb6etT6eG7ZMi7cRPlDbPr/yAt31u7vLY3K8qlg=) 2026-04-11 01:59:13.802535 | orchestrator | 2026-04-11 01:59:13.802559 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 01:59:13.802582 | orchestrator | Saturday 11 April 2026 01:59:02 +0000 (0:00:01.181) 0:00:13.796 ******** 2026-04-11 01:59:13.802607 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDL7NRYig0zX+eqrfCwPx7iZZMg+C8bPzG+NqzS1XCRdQloSgv181BW5Z1BK8YFgFLLHw3BGgByjXgllE3ie0aezvlqso3RboY/5bdiM+f58f8XNDurhN7qLbz4Puh46rSTFUXIIF89Ntc6dfBryoAXK83Jl4yUL9VLaTtEv0Vf38gNRQG4MFyAa7rTLTnA92CRP7Ln1OxCXYkJYhD9UJsJSr6R4WcK02grrOdQoatoSk/5jsUwFQr3ROxGJZX6WXNFxBjGjgqB4cpyKo0XjlwHddSJHJG0uMJnZcYJ1t+xc1KrIioCY4Ad8LF2BiI1Co+0zQN3z2dL/NkR6xMHqzPGzP4v+AkdKKCD0C+PvC4aFGWURsqHnAEEQeBdlc5tT7FTNlZJNLgXINTdeArNj9EN+0OW8Bma13q5Fcxw1INoKtOJz4lvrTnzM1Tat/52+VofYBark0nRuP2vlRpkaG3hxGaXtxiCdA4DHzUeE9olOraU3WaXfNXHmJQAZRrEMZs=) 2026-04-11 01:59:13.802633 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIMe5UcfgmScxvDS+/Y6TXVTjVjy/KISA74K+aSz1RS/5KOWdfhIXK1nTvYh8kLjRB7gDLt0GzP/gv8oiOs7vos=) 2026-04-11 01:59:13.802698 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFJtD12QNBSASB6PF17mOZ77SV5uNMvptXwdrhMx6PCw) 2026-04-11 01:59:13.802722 | orchestrator | 2026-04-11 01:59:13.802741 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-11 01:59:13.802761 | orchestrator | Saturday 11 April 2026 01:59:03 +0000 (0:00:01.118) 
0:00:14.915 ******** 2026-04-11 01:59:13.802781 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-11 01:59:13.802891 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-11 01:59:13.802910 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-11 01:59:13.802928 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-11 01:59:13.802947 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-11 01:59:13.802964 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-11 01:59:13.802981 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-11 01:59:13.802999 | orchestrator | 2026-04-11 01:59:13.803018 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-11 01:59:13.803039 | orchestrator | Saturday 11 April 2026 01:59:08 +0000 (0:00:05.614) 0:00:20.529 ******** 2026-04-11 01:59:13.803059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-11 01:59:13.803079 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-11 01:59:13.803096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-11 01:59:13.803114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-11 01:59:13.803134 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-11 01:59:13.803153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-11 01:59:13.803173 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-11 01:59:13.803191 | orchestrator | 2026-04-11 01:59:13.803208 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 01:59:13.803220 | orchestrator | Saturday 11 April 2026 01:59:09 +0000 (0:00:00.184) 0:00:20.714 ******** 2026-04-11 01:59:13.803231 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBbSGbGmtyzy1a4bF59ZSm9IB2IV6LdTgfA+ojWiuqIF) 2026-04-11 01:59:13.803282 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChH3Yv5PWZterJmCSjFn2JwPnF3O36lUIETjFRJXo64D/WBoIYKsgxfirHsDMBl8B0JZrPSN9ST0w53j1xdc2zVyRWxGYGbn1cwkC7K4/DL59HbdCTb9gzowvcIyLBk6hcCHVfk0rD6OA13NzPUyH3ijFPBx9Fv2meOjhSMAt2iYMaGDMA9IvD7M6lxjdGbWPKdzVp4n+OelRnuz8LgpcfVCP0hxGsjctE5acVsA1plZL5r/TqRq0n0NSPodwu+tXP59hvcs/v/7bH9rjaOAbCE/HDnuTjoZoGKGQfN54JXU3yc/D9j04bjkAg8nN/ov9oFP9DawNPrSPkkLeIBXw8iKBA9vqiP1ZBUcqMbuB/ftHs7eOBZFSweMSUSqIHORZgXYJFfxFDXMktQBswye+a7CQyrq3+bL1AxfXfoloH0sUlCvDuPsKSEb6lKSbiw/15i2lbJwz51oXPAP/rWemsx+ewL2RgrmUyuua9n8pBAQu0e46jA48yJeBxNkpljT8=) 2026-04-11 01:59:13.803309 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMMm3LPQEIpk9qOtx0gO9wNKKrzpLGZXdDcCPcNyhR39BzfK3amSE6UX53yRilBi3kVpJdUEWuj2g2Nm2F7s0Hc=) 2026-04-11 
01:59:13.803336 | orchestrator | 2026-04-11 01:59:13.803349 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 01:59:13.803360 | orchestrator | Saturday 11 April 2026 01:59:10 +0000 (0:00:01.170) 0:00:21.884 ******** 2026-04-11 01:59:13.803377 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD6+ifkcLdWJHZKxzT9fBvkjE3DtGvKgDgwqdJ0xgk3/2EhPfbelDwLXgUlrQKu5oBWgwCCvv6/OAKs3Ak/AyGeGbsR6+1F+pgAxQtpuHV126uYzp6llKiOKgbcxDGjNJm1sHOGHxthU4rU8XvLew3dSQTHf5Q7iZWlioj7ifr8Mu8fWzPRZlNYeahyn5Zr0mwiRnPqsTRR8bnpSaOpf6lb9D1tJUSbhFzU07T/zCkQsWjLdwN/LW23hBQtfyJayPxk90pZYEqY4hvEPcepkc8afYdXl5pbb0IGLYLgIFREIgDXAjknU1HDPXB1xNvzA3DIMQ2Q+Q3Mfbj22x2dVeYrcwfADsmuw5GIynMO1dsTb5Jx7OmcUL5mbHTKF05wxyjwtP4W0FoRk5NSlJ3lRwdxEUtgAiPSS0CMYuMbT7nA4ViX6Sk5pCWcOCMGFFHssiouVHC+rWdnBUvaFn7F4VJ+bWD8yitWmLQGQcCkc2YTO0Mrl9GQYnctbliQ06BCSeM=) 2026-04-11 01:59:13.803388 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLxOqNOB0Jbz7fnH4GdvuETxi3dSNKj2y1w/Yr1ifBd6iqMKO7x1QWPAJdLYzSDGUlVyDtNmzW/lCdF71mos47E=) 2026-04-11 01:59:13.803400 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHRzG0yEEh3z8g+bjZzY/Ojc42C0O2AI3H5X909j/TW8) 2026-04-11 01:59:13.803411 | orchestrator | 2026-04-11 01:59:13.803422 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 01:59:13.803432 | orchestrator | Saturday 11 April 2026 01:59:11 +0000 (0:00:01.132) 0:00:23.017 ******** 2026-04-11 01:59:13.803443 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLj0xfsHQHQCqsjbGnDQmiQ95+PNj2wnl0BqM3tF4nzO66OyElwpQi9iAuDwRzg3X25Z/ggbeUCLugSLQlX0V90=) 2026-04-11 01:59:13.803455 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCyEiTgg7ACO3SHyWpTKPEESUubhE/Y+cW1LQ9EtbZOfrc/od2gKIkOk+CBblunWXsRb9MZp81DOYGE0uioS+9DR6WvmWPaQYXf9rNi4KPQi4QaWB7oH7KB7MVvx2X57unHU6l2wsRSXfGf9g1J+hNMLYJ3QuMHH0p3bCQdKaO+Z4wHKavZvVXDGfkbGD7anhWUAXMbujdAxURNkr3YBdqGkvW9eQ7V6Hm2Je3hqG3HFvN6YCBWrHAITU7PE0gx92dux9bCVFw7qARcrHW3RIaAVrn5JTe2ZHCKYVyPhwRF8Uy7tAlm7UQqcVXOr5E8Mec3HiqIB8+Z6jputaOx12AqQziSGIeAkSLTbSkox4j/pwXwT7p52oOFRXFEGPFF9RjHjFVLyawtnYqmC/P8Q0NcM7x+lsyaxa58Ag6Gz/vYR1bxK1ZDWcu2KsJKZC53l2uriTUdSr3L4BWcjYVnd/C2wTjMvesuRIDPejL7Vk4t5IUHnA1gQWmd15i3hO6HxNU=) 2026-04-11 01:59:13.803466 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICr2+YAclnYhMFLq4UvA+iC4X4xjOhtvnWFzQkg7E+Nt) 2026-04-11 01:59:13.803477 | orchestrator | 2026-04-11 01:59:13.803488 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 01:59:13.803499 | orchestrator | Saturday 11 April 2026 01:59:12 +0000 (0:00:01.202) 0:00:24.219 ******** 2026-04-11 01:59:13.803510 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF7WwWbBlCQucdFNsJBEHdpNWeIN1AD9CIFC5VzMpemxwYR5JQA8FsYN3KEgMxkpWi/wgbpQ7hFOiV3VdzsLcPs=) 2026-04-11 01:59:13.803521 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2mfZj/DpCKVEoStss5LOdRrQ1/EUjXe6LayHJ6zjTJgc0pbqy7FMaX0UeSyv+d5yxj38SDbyKLpAAOR5C6EjqwRepBgWpSO5+PbecJnWD/fUJmXKZBWUtZEgHpJWB40/4tpT3MfviJk8mt2+Qr5XTFh8c7NUAdMaGUxuBj10bTScIxWfvh4BwTvNIb+T0hGrQ6EzMm/jQ1t9dXQ/ZoJeGL3+6mHOomKpctuXyX8NIjdsFa681GO/wnVhYMmiet6CKsH8Uh015ntv8mS09C9MK7iWIRtpBjFpKrx5slpdSQLu+zyhgPZ5RruzCRREdsvgJ47UklWMIqZzT0NTTMrP/2GSW85jR1XYtltZMODllDmw3V6sg89IRDj4bxBxqmG7XJLOH0LXm2oYOlOdI/9ZEJLZJmIBN4VvWzGsf7V5U2sNEk2OTAknLQg3KL9PfvtxOcuPCCB4WJy3toa/4pUOm+QPPaVCkVOF7e0xV9VB4T2hDRuk3YzYOzdfW+erA4Gs=) 2026-04-11 01:59:13.803544 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID/x4uMazyl2GTph75IDHau8Lak7LmAE+nz3NdGXM0Rs) 2026-04-11 01:59:18.741492 | orchestrator | 2026-04-11 01:59:18.741576 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 01:59:18.741587 | orchestrator | Saturday 11 April 2026 01:59:13 +0000 (0:00:01.189) 0:00:25.408 ******** 2026-04-11 01:59:18.741594 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEHwJy5J4HRG5zfmAuyJGwe1/jfW7rVyp2ULh3JTkjpj) 2026-04-11 01:59:18.741604 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpz831la6lI4h2zRBDY+HscEZ/o5rO7FQnPOXK+/ULzxf8WTKTJx8h2EB7Ttc1YJR1cTxS1x9IHvq9m6cT8W80UmgKMoAG8n9DKMYDP46R+JSbWsviLTwfAySOt8j6iYrALwf7O+r9cx5Mf6Sq9sEB+lBU16aSKP5fVBBod6XTgKcC8ilzvdntMe6nfqojji5Wv+er6PR9EGOMRGZSXUCf2Lif0CjPLVvIys5yJW7ajLK0i3znoNLo8OlPiUzMdTYk4N9lnuLar3nqLXxD4wbW0DbzIvCjqqWeKTVqV42s2DBoWRluUiC/CTGr9uCiZWCAQv/PyPGozQdnDtkp09fK8O0k503IRM4a/bA0YuFb1JI3Z5PZlwrRQrXmQSFP8rw95PQpCRIzH0ozeYTr0MzV2VAzs2JyP/M94iZvM+4sEyP/74cH3rGDLPDjdH0F/37vQ3etaL3qpgkVlzi+NzpKKifDz0acwh44VyFB//7a1p2YvMtUDxuVspXsM/wx6zM=) 2026-04-11 01:59:18.741615 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOAh6nXxuzoiCRpISXSfX3/K0j9212apPicJ6n2daajDkKA2OK957SPMJ/uoae5hrkoBCstqFhiSvU38rci/xBM=) 2026-04-11 01:59:18.741623 | orchestrator | 2026-04-11 01:59:18.741629 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 01:59:18.741635 | orchestrator | Saturday 11 April 2026 01:59:14 +0000 (0:00:01.182) 0:00:26.591 ******** 2026-04-11 01:59:18.741641 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIh/9ctp7KAmDdQkDKW0qmwmNSJtFEjYgm2+mvLJxiBXKaTHgb6etT6eG7ZMi7cRPlDbPr/yAt31u7vLY3K8qlg=) 2026-04-11 01:59:18.741647 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILxc3MKlSv+bQvyQlq/3Hw46hFdOvf/URY46KNSDraG+) 2026-04-11 01:59:18.741653 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLIIuDTAjRDyyfl3omlj0Z/PW7l+nwF3LIcxkFIQ6wqgfoXYjKfBUXTJC5zYnlaxKi96Wu3U8JH9WZbLH+Yw3Oe6xBoZbYJkjirPm+Gzj3shR4MriSO7b8id2dMsshC0Xdqc26SUuMZs4hkDziF2AWbzUQxOQnDIduIgiODCphyA9KwqMuk0faa3nsJTsokEsC4dwy0NCkNCkEIX42p4Hy74EhtbbKOkBgGjFRkYIKAFPxcKlWtyU+aYahkuk2W8amtkeesuv85RO1PFdDd8xXLbvanYQruN3EoqIqaBTxP6l0AlMf0x1vrrdgjGJ5wTu6Rkg3l92rUkzcyZMlswkdfP0mhPN8TbhFMbznDzPbKfKY+PNC2xQLxkYxELlMTEbHw+cmiq3BRZXjQ9eshvyaleIOSFPe4IQHN5DVrJJbwsQDfeJEAwH231XNvup4p59B+KF9dUt5UyXWs5AjY/KWM94EAeo8tZFuqYKz0qtXdaWdx1sX4rhLZe4A+TgI9ck=) 2026-04-11 01:59:18.741659 | orchestrator | 2026-04-11 01:59:18.741665 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-11 01:59:18.741671 | orchestrator | Saturday 11 April 2026 01:59:16 +0000 (0:00:01.181) 0:00:27.772 ******** 2026-04-11 01:59:18.741677 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIMe5UcfgmScxvDS+/Y6TXVTjVjy/KISA74K+aSz1RS/5KOWdfhIXK1nTvYh8kLjRB7gDLt0GzP/gv8oiOs7vos=) 2026-04-11 01:59:18.741698 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDL7NRYig0zX+eqrfCwPx7iZZMg+C8bPzG+NqzS1XCRdQloSgv181BW5Z1BK8YFgFLLHw3BGgByjXgllE3ie0aezvlqso3RboY/5bdiM+f58f8XNDurhN7qLbz4Puh46rSTFUXIIF89Ntc6dfBryoAXK83Jl4yUL9VLaTtEv0Vf38gNRQG4MFyAa7rTLTnA92CRP7Ln1OxCXYkJYhD9UJsJSr6R4WcK02grrOdQoatoSk/5jsUwFQr3ROxGJZX6WXNFxBjGjgqB4cpyKo0XjlwHddSJHJG0uMJnZcYJ1t+xc1KrIioCY4Ad8LF2BiI1Co+0zQN3z2dL/NkR6xMHqzPGzP4v+AkdKKCD0C+PvC4aFGWURsqHnAEEQeBdlc5tT7FTNlZJNLgXINTdeArNj9EN+0OW8Bma13q5Fcxw1INoKtOJz4lvrTnzM1Tat/52+VofYBark0nRuP2vlRpkaG3hxGaXtxiCdA4DHzUeE9olOraU3WaXfNXHmJQAZRrEMZs=) 2026-04-11 01:59:18.741705 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFJtD12QNBSASB6PF17mOZ77SV5uNMvptXwdrhMx6PCw) 2026-04-11 01:59:18.741711 | orchestrator | 2026-04-11 01:59:18.741716 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-11 01:59:18.741742 | orchestrator | Saturday 11 April 2026 01:59:17 +0000 (0:00:01.185) 0:00:28.958 ******** 2026-04-11 01:59:18.741749 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-11 01:59:18.741755 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-11 01:59:18.741761 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-11 01:59:18.741767 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-11 01:59:18.741772 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-11 01:59:18.741778 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-11 01:59:18.741784 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-11 01:59:18.741790 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:59:18.741950 | orchestrator | 2026-04-11 01:59:18.741996 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-11 01:59:18.742008 | orchestrator | Saturday 11 April 
2026 01:59:17 +0000 (0:00:00.188) 0:00:29.147 ******** 2026-04-11 01:59:18.742094 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:59:18.742105 | orchestrator | 2026-04-11 01:59:18.742113 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-11 01:59:18.742119 | orchestrator | Saturday 11 April 2026 01:59:17 +0000 (0:00:00.071) 0:00:29.218 ******** 2026-04-11 01:59:18.742126 | orchestrator | skipping: [testbed-manager] 2026-04-11 01:59:18.742132 | orchestrator | 2026-04-11 01:59:18.742139 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-11 01:59:18.742146 | orchestrator | Saturday 11 April 2026 01:59:17 +0000 (0:00:00.069) 0:00:29.288 ******** 2026-04-11 01:59:18.742158 | orchestrator | changed: [testbed-manager] 2026-04-11 01:59:18.742165 | orchestrator | 2026-04-11 01:59:18.742171 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 01:59:18.742179 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-11 01:59:18.742187 | orchestrator | 2026-04-11 01:59:18.742193 | orchestrator | 2026-04-11 01:59:18.742201 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 01:59:18.742208 | orchestrator | Saturday 11 April 2026 01:59:18 +0000 (0:00:00.813) 0:00:30.101 ******** 2026-04-11 01:59:18.742246 | orchestrator | =============================================================================== 2026-04-11 01:59:18.742253 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.26s 2026-04-11 01:59:18.742260 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.61s 2026-04-11 01:59:18.742267 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.27s 2026-04-11 
01:59:18.742274 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-04-11 01:59:18.742281 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-04-11 01:59:18.742287 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-04-11 01:59:18.742294 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-04-11 01:59:18.742301 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-04-11 01:59:18.742307 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-04-11 01:59:18.742314 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-04-11 01:59:18.742320 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-04-11 01:59:18.742326 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-04-11 01:59:18.742333 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-04-11 01:59:18.742340 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-04-11 01:59:18.742358 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-04-11 01:59:18.742367 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-04-11 01:59:18.742376 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.81s 2026-04-11 01:59:18.742385 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2026-04-11 01:59:18.742395 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 
2026-04-11 01:59:18.742405 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-04-11 01:59:19.129630 | orchestrator | + osism apply squid 2026-04-11 01:59:31.393234 | orchestrator | 2026-04-11 01:59:31 | INFO  | Task b635f161-03ab-453a-b3a5-8c03f01ab88a (squid) was prepared for execution. 2026-04-11 01:59:31.393326 | orchestrator | 2026-04-11 01:59:31 | INFO  | It takes a moment until task b635f161-03ab-453a-b3a5-8c03f01ab88a (squid) has been started and output is visible here. 2026-04-11 02:01:27.959595 | orchestrator | 2026-04-11 02:01:27.959728 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-11 02:01:27.959754 | orchestrator | 2026-04-11 02:01:27.959770 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-11 02:01:27.959787 | orchestrator | Saturday 11 April 2026 01:59:36 +0000 (0:00:00.215) 0:00:00.216 ******** 2026-04-11 02:01:27.959804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-11 02:01:27.959822 | orchestrator | 2026-04-11 02:01:27.959838 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-11 02:01:27.959854 | orchestrator | Saturday 11 April 2026 01:59:36 +0000 (0:00:00.092) 0:00:00.308 ******** 2026-04-11 02:01:27.959872 | orchestrator | ok: [testbed-manager] 2026-04-11 02:01:27.959889 | orchestrator | 2026-04-11 02:01:27.959906 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-11 02:01:27.959924 | orchestrator | Saturday 11 April 2026 01:59:37 +0000 (0:00:01.677) 0:00:01.985 ******** 2026-04-11 02:01:27.959942 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-11 02:01:27.960005 | orchestrator | changed: 
[testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-11 02:01:27.960022 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-11 02:01:27.960036 | orchestrator | 2026-04-11 02:01:27.960050 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-11 02:01:27.960060 | orchestrator | Saturday 11 April 2026 01:59:39 +0000 (0:00:01.223) 0:00:03.208 ******** 2026-04-11 02:01:27.960070 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-11 02:01:27.960080 | orchestrator | 2026-04-11 02:01:27.960090 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-11 02:01:27.960100 | orchestrator | Saturday 11 April 2026 01:59:40 +0000 (0:00:01.147) 0:00:04.356 ******** 2026-04-11 02:01:27.960110 | orchestrator | ok: [testbed-manager] 2026-04-11 02:01:27.960120 | orchestrator | 2026-04-11 02:01:27.960129 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-11 02:01:27.960139 | orchestrator | Saturday 11 April 2026 01:59:40 +0000 (0:00:00.415) 0:00:04.772 ******** 2026-04-11 02:01:27.960150 | orchestrator | changed: [testbed-manager] 2026-04-11 02:01:27.960160 | orchestrator | 2026-04-11 02:01:27.960170 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-11 02:01:27.960180 | orchestrator | Saturday 11 April 2026 01:59:41 +0000 (0:00:00.989) 0:00:05.761 ******** 2026-04-11 02:01:27.960190 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-11 02:01:27.960205 | orchestrator | ok: [testbed-manager] 2026-04-11 02:01:27.960215 | orchestrator | 2026-04-11 02:01:27.960225 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-11 02:01:27.960256 | orchestrator | Saturday 11 April 2026 02:00:14 +0000 (0:00:32.984) 0:00:38.746 ******** 2026-04-11 02:01:27.960267 | orchestrator | changed: [testbed-manager] 2026-04-11 02:01:27.960277 | orchestrator | 2026-04-11 02:01:27.960287 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-11 02:01:27.960297 | orchestrator | Saturday 11 April 2026 02:00:26 +0000 (0:00:12.187) 0:00:50.933 ******** 2026-04-11 02:01:27.960307 | orchestrator | Pausing for 60 seconds 2026-04-11 02:01:27.960318 | orchestrator | changed: [testbed-manager] 2026-04-11 02:01:27.960328 | orchestrator | 2026-04-11 02:01:27.960338 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-11 02:01:27.960347 | orchestrator | Saturday 11 April 2026 02:01:26 +0000 (0:01:00.094) 0:01:51.028 ******** 2026-04-11 02:01:27.960357 | orchestrator | ok: [testbed-manager] 2026-04-11 02:01:27.960367 | orchestrator | 2026-04-11 02:01:27.960376 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-11 02:01:27.960386 | orchestrator | Saturday 11 April 2026 02:01:26 +0000 (0:00:00.067) 0:01:51.096 ******** 2026-04-11 02:01:27.960395 | orchestrator | changed: [testbed-manager] 2026-04-11 02:01:27.960405 | orchestrator | 2026-04-11 02:01:27.960415 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 02:01:27.960424 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 02:01:27.960434 | orchestrator | 2026-04-11 02:01:27.960444 | orchestrator | 2026-04-11 02:01:27.960454 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-11 02:01:27.960463 | orchestrator | Saturday 11 April 2026 02:01:27 +0000 (0:00:00.678) 0:01:51.774 ******** 2026-04-11 02:01:27.960473 | orchestrator | =============================================================================== 2026-04-11 02:01:27.960483 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-04-11 02:01:27.960492 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.98s 2026-04-11 02:01:27.960502 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.19s 2026-04-11 02:01:27.960525 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.68s 2026-04-11 02:01:27.960535 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s 2026-04-11 02:01:27.960545 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.15s 2026-04-11 02:01:27.960554 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.99s 2026-04-11 02:01:27.960564 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.68s 2026-04-11 02:01:27.960573 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.42s 2026-04-11 02:01:27.960583 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-04-11 02:01:27.960593 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-04-11 02:01:28.305886 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-11 02:01:28.306084 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-11 02:01:28.367674 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-11 02:01:28.367782 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-04-11 02:01:28.377469 | orchestrator | + set -e 2026-04-11 02:01:28.377566 | orchestrator | + NAMESPACE=kolla/release 2026-04-11 02:01:28.377583 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-11 02:01:28.383826 | orchestrator | ++ semver 9.5.0 9.0.0 2026-04-11 02:01:28.468391 | orchestrator | + [[ 1 -lt 0 ]] 2026-04-11 02:01:28.469403 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-11 02:01:40.816274 | orchestrator | 2026-04-11 02:01:40 | INFO  | Task 3d84b8f6-5cd3-4509-8db3-5b5211cc823c (operator) was prepared for execution. 2026-04-11 02:01:40.816380 | orchestrator | 2026-04-11 02:01:40 | INFO  | It takes a moment until task 3d84b8f6-5cd3-4509-8db3-5b5211cc823c (operator) has been started and output is visible here. 2026-04-11 02:01:57.131786 | orchestrator | 2026-04-11 02:01:57.131892 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-11 02:01:57.131904 | orchestrator | 2026-04-11 02:01:57.131911 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-11 02:01:57.131919 | orchestrator | Saturday 11 April 2026 02:01:45 +0000 (0:00:00.147) 0:00:00.147 ******** 2026-04-11 02:01:57.131926 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:01:57.131934 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:01:57.131941 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:01:57.131948 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:01:57.131955 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:01:57.131962 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:01:57.131969 | orchestrator | 2026-04-11 02:01:57.131976 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-04-11 02:01:57.131983 | orchestrator | Saturday 11 April 2026 02:01:48 +0000 (0:00:03.264) 0:00:03.411 
******** 2026-04-11 02:01:57.132033 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:01:57.132042 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:01:57.132049 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:01:57.132057 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:01:57.132064 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:01:57.132071 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:01:57.132078 | orchestrator | 2026-04-11 02:01:57.132084 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-11 02:01:57.132090 | orchestrator | 2026-04-11 02:01:57.132097 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-11 02:01:57.132103 | orchestrator | Saturday 11 April 2026 02:01:49 +0000 (0:00:00.792) 0:00:04.204 ******** 2026-04-11 02:01:57.132109 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:01:57.132116 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:01:57.132123 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:01:57.132129 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:01:57.132136 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:01:57.132144 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:01:57.132151 | orchestrator | 2026-04-11 02:01:57.132158 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-11 02:01:57.132182 | orchestrator | Saturday 11 April 2026 02:01:49 +0000 (0:00:00.199) 0:00:04.403 ******** 2026-04-11 02:01:57.132189 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:01:57.132196 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:01:57.132203 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:01:57.132210 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:01:57.132217 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:01:57.132223 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:01:57.132230 | orchestrator | 2026-04-11 02:01:57.132236 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-11 02:01:57.132243 | orchestrator | Saturday 11 April 2026 02:01:49 +0000 (0:00:00.179) 0:00:04.582 ******** 2026-04-11 02:01:57.132250 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:01:57.132257 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:01:57.132264 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:01:57.132272 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:01:57.132279 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:01:57.132287 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:01:57.132294 | orchestrator | 2026-04-11 02:01:57.132300 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-11 02:01:57.132308 | orchestrator | Saturday 11 April 2026 02:01:50 +0000 (0:00:00.629) 0:00:05.212 ******** 2026-04-11 02:01:57.132314 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:01:57.132322 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:01:57.132330 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:01:57.132339 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:01:57.132347 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:01:57.132356 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:01:57.132389 | orchestrator | 2026-04-11 02:01:57.132401 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-11 02:01:57.132411 | orchestrator | Saturday 11 April 2026 02:01:51 +0000 (0:00:00.870) 0:00:06.083 ******** 2026-04-11 02:01:57.132419 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-11 02:01:57.132427 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-04-11 02:01:57.132434 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-04-11 02:01:57.132442 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-04-11 02:01:57.132449 | 
orchestrator | changed: [testbed-node-4] => (item=adm) 2026-04-11 02:01:57.132457 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-04-11 02:01:57.132466 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-04-11 02:01:57.132475 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-04-11 02:01:57.132484 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-04-11 02:01:57.132492 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-04-11 02:01:57.132500 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-04-11 02:01:57.132508 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-04-11 02:01:57.132517 | orchestrator | 2026-04-11 02:01:57.132526 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-11 02:01:57.132534 | orchestrator | Saturday 11 April 2026 02:01:52 +0000 (0:00:01.173) 0:00:07.256 ******** 2026-04-11 02:01:57.132542 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:01:57.132551 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:01:57.132559 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:01:57.132568 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:01:57.132577 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:01:57.132585 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:01:57.132594 | orchestrator | 2026-04-11 02:01:57.132603 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-11 02:01:57.132612 | orchestrator | Saturday 11 April 2026 02:01:53 +0000 (0:00:01.183) 0:00:08.440 ******** 2026-04-11 02:01:57.132621 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-04-11 02:01:57.132629 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-04-11 02:01:57.132638 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-04-11 02:01:57.132647 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-04-11 02:01:57.132674 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-04-11 02:01:57.132683 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-04-11 02:01:57.132692 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-04-11 02:01:57.132700 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-04-11 02:01:57.132708 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-04-11 02:01:57.132717 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-04-11 02:01:57.132724 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-04-11 02:01:57.132731 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-04-11 02:01:57.132738 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-04-11 02:01:57.132746 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-04-11 02:01:57.132753 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-04-11 02:01:57.132761 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-04-11 02:01:57.132768 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-04-11 02:01:57.132775 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-04-11 02:01:57.132782 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-04-11 02:01:57.132790 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-04-11 02:01:57.132804 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-04-11 02:01:57.132812 | 
orchestrator | 2026-04-11 02:01:57.132819 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-11 02:01:57.132827 | orchestrator | Saturday 11 April 2026 02:01:54 +0000 (0:00:01.260) 0:00:09.700 ******** 2026-04-11 02:01:57.132835 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:01:57.132842 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:01:57.132851 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:01:57.132858 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:01:57.132866 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:01:57.132873 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:01:57.132881 | orchestrator | 2026-04-11 02:01:57.132889 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-11 02:01:57.132896 | orchestrator | Saturday 11 April 2026 02:01:54 +0000 (0:00:00.190) 0:00:09.891 ******** 2026-04-11 02:01:57.132904 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:01:57.132911 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:01:57.132918 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:01:57.132926 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:01:57.132932 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:01:57.132940 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:01:57.132948 | orchestrator | 2026-04-11 02:01:57.132956 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-11 02:01:57.132964 | orchestrator | Saturday 11 April 2026 02:01:55 +0000 (0:00:00.206) 0:00:10.097 ******** 2026-04-11 02:01:57.132971 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:01:57.132979 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:01:57.132986 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:01:57.133012 | orchestrator | changed: [testbed-node-1] 2026-04-11 
02:01:57.133019 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:01:57.133027 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:01:57.133035 | orchestrator | 2026-04-11 02:01:57.133042 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-11 02:01:57.133049 | orchestrator | Saturday 11 April 2026 02:01:55 +0000 (0:00:00.699) 0:00:10.796 ******** 2026-04-11 02:01:57.133056 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:01:57.133064 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:01:57.133071 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:01:57.133079 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:01:57.133085 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:01:57.133091 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:01:57.133098 | orchestrator | 2026-04-11 02:01:57.133104 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-11 02:01:57.133110 | orchestrator | Saturday 11 April 2026 02:01:55 +0000 (0:00:00.210) 0:00:11.007 ******** 2026-04-11 02:01:57.133116 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-11 02:01:57.133132 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:01:57.133140 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-11 02:01:57.133147 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-11 02:01:57.133155 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-11 02:01:57.133163 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:01:57.133171 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-11 02:01:57.133178 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:01:57.133186 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:01:57.133193 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:01:57.133200 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-11 
02:01:57.133207 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:01:57.133215 | orchestrator | 2026-04-11 02:01:57.133223 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-11 02:01:57.133230 | orchestrator | Saturday 11 April 2026 02:01:56 +0000 (0:00:00.752) 0:00:11.760 ******** 2026-04-11 02:01:57.133244 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:01:57.133252 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:01:57.133260 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:01:57.133267 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:01:57.133274 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:01:57.133282 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:01:57.133289 | orchestrator | 2026-04-11 02:01:57.133297 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-11 02:01:57.133304 | orchestrator | Saturday 11 April 2026 02:01:56 +0000 (0:00:00.199) 0:00:11.959 ******** 2026-04-11 02:01:57.133312 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:01:57.133319 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:01:57.133326 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:01:57.133333 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:01:57.133349 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:01:58.600984 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:01:58.601231 | orchestrator | 2026-04-11 02:01:58.601308 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-11 02:01:58.601326 | orchestrator | Saturday 11 April 2026 02:01:57 +0000 (0:00:00.191) 0:00:12.150 ******** 2026-04-11 02:01:58.601338 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:01:58.601348 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:01:58.601358 | orchestrator | skipping: [testbed-node-2] 2026-04-11 
02:01:58.601367 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:01:58.601377 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:01:58.601387 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:01:58.601397 | orchestrator | 2026-04-11 02:01:58.601407 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-11 02:01:58.601418 | orchestrator | Saturday 11 April 2026 02:01:57 +0000 (0:00:00.203) 0:00:12.354 ******** 2026-04-11 02:01:58.601428 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:01:58.601438 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:01:58.601448 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:01:58.601458 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:01:58.601469 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:01:58.601480 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:01:58.601490 | orchestrator | 2026-04-11 02:01:58.601501 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-11 02:01:58.601512 | orchestrator | Saturday 11 April 2026 02:01:58 +0000 (0:00:00.676) 0:00:13.031 ******** 2026-04-11 02:01:58.601523 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:01:58.601535 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:01:58.601547 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:01:58.601559 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:01:58.601570 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:01:58.601581 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:01:58.601592 | orchestrator | 2026-04-11 02:01:58.601602 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 02:01:58.601638 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-11 02:01:58.601652 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-11 02:01:58.601663 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-11 02:01:58.601674 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-11 02:01:58.601684 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-11 02:01:58.601720 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-11 02:01:58.601730 | orchestrator | 2026-04-11 02:01:58.601741 | orchestrator | 2026-04-11 02:01:58.601750 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 02:01:58.601759 | orchestrator | Saturday 11 April 2026 02:01:58 +0000 (0:00:00.257) 0:00:13.288 ******** 2026-04-11 02:01:58.601770 | orchestrator | =============================================================================== 2026-04-11 02:01:58.601780 | orchestrator | Gathering Facts --------------------------------------------------------- 3.26s 2026-04-11 02:01:58.601790 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s 2026-04-11 02:01:58.601801 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s 2026-04-11 02:01:58.601810 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.17s 2026-04-11 02:01:58.601820 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.87s 2026-04-11 02:01:58.601829 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s 2026-04-11 02:01:58.601896 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.75s 2026-04-11 02:01:58.601908 | orchestrator | osism.commons.operator : Create .ssh 
directory -------------------------- 0.70s 2026-04-11 02:01:58.601919 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.68s 2026-04-11 02:01:58.601930 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s 2026-04-11 02:01:58.601940 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s 2026-04-11 02:01:58.601951 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.21s 2026-04-11 02:01:58.601962 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.21s 2026-04-11 02:01:58.601972 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.20s 2026-04-11 02:01:58.601982 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.20s 2026-04-11 02:01:58.602125 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s 2026-04-11 02:01:58.602146 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s 2026-04-11 02:01:58.602158 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.19s 2026-04-11 02:01:58.602169 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2026-04-11 02:01:59.039306 | orchestrator | + osism apply --environment custom facts 2026-04-11 02:02:01.154947 | orchestrator | 2026-04-11 02:02:01 | INFO  | Trying to run play facts in environment custom 2026-04-11 02:02:11.294752 | orchestrator | 2026-04-11 02:02:11 | INFO  | Task 6fe9ca42-c3aa-4d62-97a1-789a755c73d0 (facts) was prepared for execution. 2026-04-11 02:02:11.294901 | orchestrator | 2026-04-11 02:02:11 | INFO  | It takes a moment until task 6fe9ca42-c3aa-4d62-97a1-789a755c73d0 (facts) has been started and output is visible here. 
2026-04-11 02:02:54.026449 | orchestrator | 2026-04-11 02:02:54.026563 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-04-11 02:02:54.026579 | orchestrator | 2026-04-11 02:02:54.026591 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-11 02:02:54.026603 | orchestrator | Saturday 11 April 2026 02:02:15 +0000 (0:00:00.091) 0:00:00.091 ******** 2026-04-11 02:02:54.026615 | orchestrator | ok: [testbed-manager] 2026-04-11 02:02:54.026627 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:02:54.026638 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:02:54.026649 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:02:54.026660 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:02:54.026671 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:02:54.026707 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:02:54.026719 | orchestrator | 2026-04-11 02:02:54.026731 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-04-11 02:02:54.026742 | orchestrator | Saturday 11 April 2026 02:02:17 +0000 (0:00:01.481) 0:00:01.572 ******** 2026-04-11 02:02:54.026753 | orchestrator | ok: [testbed-manager] 2026-04-11 02:02:54.026764 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:02:54.026775 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:02:54.026785 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:02:54.026796 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:02:54.026807 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:02:54.026818 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:02:54.026829 | orchestrator | 2026-04-11 02:02:54.026840 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-04-11 02:02:54.026851 | orchestrator | 2026-04-11 02:02:54.026862 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-04-11 02:02:54.026873 | orchestrator | Saturday 11 April 2026 02:02:18 +0000 (0:00:01.174) 0:00:02.747 ******** 2026-04-11 02:02:54.026884 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:02:54.026895 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:02:54.026906 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:02:54.026917 | orchestrator | 2026-04-11 02:02:54.026929 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-11 02:02:54.026940 | orchestrator | Saturday 11 April 2026 02:02:18 +0000 (0:00:00.113) 0:00:02.861 ******** 2026-04-11 02:02:54.026951 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:02:54.026965 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:02:54.026979 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:02:54.026991 | orchestrator | 2026-04-11 02:02:54.027004 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-11 02:02:54.027017 | orchestrator | Saturday 11 April 2026 02:02:18 +0000 (0:00:00.213) 0:00:03.074 ******** 2026-04-11 02:02:54.027030 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:02:54.027040 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:02:54.027051 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:02:54.027109 | orchestrator | 2026-04-11 02:02:54.027120 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-11 02:02:54.027132 | orchestrator | Saturday 11 April 2026 02:02:19 +0000 (0:00:00.251) 0:00:03.325 ******** 2026-04-11 02:02:54.027145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 02:02:54.027157 | orchestrator | 2026-04-11 02:02:54.027169 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-04-11 02:02:54.027180 | orchestrator | Saturday 11 April 2026 02:02:19 +0000 (0:00:00.135) 0:00:03.461 ******** 2026-04-11 02:02:54.027191 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:02:54.027202 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:02:54.027212 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:02:54.027223 | orchestrator | 2026-04-11 02:02:54.027234 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-11 02:02:54.027245 | orchestrator | Saturday 11 April 2026 02:02:19 +0000 (0:00:00.424) 0:00:03.885 ******** 2026-04-11 02:02:54.027256 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:02:54.027267 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:02:54.027278 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:02:54.027289 | orchestrator | 2026-04-11 02:02:54.027300 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-11 02:02:54.027312 | orchestrator | Saturday 11 April 2026 02:02:19 +0000 (0:00:00.142) 0:00:04.028 ******** 2026-04-11 02:02:54.027323 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:02:54.027334 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:02:54.027345 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:02:54.027356 | orchestrator | 2026-04-11 02:02:54.027367 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-11 02:02:54.027387 | orchestrator | Saturday 11 April 2026 02:02:20 +0000 (0:00:01.048) 0:00:05.077 ******** 2026-04-11 02:02:54.027398 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:02:54.027409 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:02:54.027420 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:02:54.027431 | orchestrator | 2026-04-11 02:02:54.027442 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-11 
02:02:54.027453 | orchestrator | Saturday 11 April 2026 02:02:21 +0000 (0:00:00.470) 0:00:05.548 ******** 2026-04-11 02:02:54.027464 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:02:54.027475 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:02:54.027486 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:02:54.027497 | orchestrator | 2026-04-11 02:02:54.027508 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-11 02:02:54.027567 | orchestrator | Saturday 11 April 2026 02:02:22 +0000 (0:00:01.049) 0:00:06.597 ******** 2026-04-11 02:02:54.027580 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:02:54.027591 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:02:54.027602 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:02:54.027613 | orchestrator | 2026-04-11 02:02:54.027624 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-04-11 02:02:54.027635 | orchestrator | Saturday 11 April 2026 02:02:38 +0000 (0:00:16.039) 0:00:22.637 ******** 2026-04-11 02:02:54.027646 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:02:54.027657 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:02:54.027668 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:02:54.027678 | orchestrator | 2026-04-11 02:02:54.027689 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-04-11 02:02:54.027719 | orchestrator | Saturday 11 April 2026 02:02:38 +0000 (0:00:00.120) 0:00:22.757 ******** 2026-04-11 02:02:54.027730 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:02:54.027742 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:02:54.027753 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:02:54.027764 | orchestrator | 2026-04-11 02:02:54.027775 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-11 
02:02:54.027786 | orchestrator | Saturday 11 April 2026 02:02:45 +0000 (0:00:07.143) 0:00:29.900 ******** 2026-04-11 02:02:54.027797 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:02:54.027808 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:02:54.027820 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:02:54.027831 | orchestrator | 2026-04-11 02:02:54.027842 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-11 02:02:54.027853 | orchestrator | Saturday 11 April 2026 02:02:46 +0000 (0:00:00.444) 0:00:30.344 ******** 2026-04-11 02:02:54.027864 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-04-11 02:02:54.027876 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-04-11 02:02:54.027887 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-04-11 02:02:54.027898 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-04-11 02:02:54.027914 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-04-11 02:02:54.027926 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-04-11 02:02:54.027937 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-04-11 02:02:54.027948 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-04-11 02:02:54.027959 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-04-11 02:02:54.027970 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-04-11 02:02:54.027981 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-04-11 02:02:54.027992 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-04-11 02:02:54.028003 | orchestrator | 2026-04-11 02:02:54.028014 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2026-04-11 02:02:54.028034 | orchestrator | Saturday 11 April 2026 02:02:49 +0000 (0:00:03.258) 0:00:33.603 ******** 2026-04-11 02:02:54.028045 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:02:54.028075 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:02:54.028086 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:02:54.028097 | orchestrator | 2026-04-11 02:02:54.028109 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-11 02:02:54.028120 | orchestrator | 2026-04-11 02:02:54.028131 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-11 02:02:54.028142 | orchestrator | Saturday 11 April 2026 02:02:50 +0000 (0:00:01.222) 0:00:34.825 ******** 2026-04-11 02:02:54.028153 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:02:54.028164 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:02:54.028175 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:02:54.028186 | orchestrator | ok: [testbed-manager] 2026-04-11 02:02:54.028197 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:02:54.028208 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:02:54.028219 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:02:54.028230 | orchestrator | 2026-04-11 02:02:54.028242 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 02:02:54.028253 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 02:02:54.028265 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 02:02:54.028277 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 02:02:54.028288 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 02:02:54.028300 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 02:02:54.028311 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 02:02:54.028322 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 02:02:54.028333 | orchestrator | 2026-04-11 02:02:54.028344 | orchestrator | 2026-04-11 02:02:54.028356 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 02:02:54.028367 | orchestrator | Saturday 11 April 2026 02:02:53 +0000 (0:00:03.448) 0:00:38.273 ******** 2026-04-11 02:02:54.028378 | orchestrator | =============================================================================== 2026-04-11 02:02:54.028389 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.04s 2026-04-11 02:02:54.028400 | orchestrator | Install required packages (Debian) -------------------------------------- 7.14s 2026-04-11 02:02:54.028411 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.45s 2026-04-11 02:02:54.028422 | orchestrator | Copy fact files --------------------------------------------------------- 3.26s 2026-04-11 02:02:54.028434 | orchestrator | Create custom facts directory ------------------------------------------- 1.48s 2026-04-11 02:02:54.028445 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.22s 2026-04-11 02:02:54.028462 | orchestrator | Copy fact file ---------------------------------------------------------- 1.17s 2026-04-11 02:02:54.309312 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s 2026-04-11 02:02:54.309432 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s 2026-04-11 02:02:54.309456 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.47s 2026-04-11 02:02:54.309472 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s 2026-04-11 02:02:54.309523 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s 2026-04-11 02:02:54.309542 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.25s 2026-04-11 02:02:54.309559 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2026-04-11 02:02:54.309581 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s 2026-04-11 02:02:54.309597 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s 2026-04-11 02:02:54.309614 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s 2026-04-11 02:02:54.309649 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2026-04-11 02:02:54.676252 | orchestrator | + osism apply bootstrap 2026-04-11 02:03:06.879690 | orchestrator | 2026-04-11 02:03:06 | INFO  | Task 08e3551b-44ca-40a5-a2a4-2836dc1c7cd4 (bootstrap) was prepared for execution. 2026-04-11 02:03:06.879834 | orchestrator | 2026-04-11 02:03:06 | INFO  | It takes a moment until task 08e3551b-44ca-40a5-a2a4-2836dc1c7cd4 (bootstrap) has been started and output is visible here. 
2026-04-11 02:03:23.937792 | orchestrator | 2026-04-11 02:03:23.937900 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-04-11 02:03:23.937918 | orchestrator | 2026-04-11 02:03:23.937928 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-04-11 02:03:23.937939 | orchestrator | Saturday 11 April 2026 02:03:11 +0000 (0:00:00.166) 0:00:00.166 ******** 2026-04-11 02:03:23.937949 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:23.937959 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:03:23.937969 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:03:23.937978 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:03:23.937987 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:03:23.937996 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:03:23.938006 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:03:23.938074 | orchestrator | 2026-04-11 02:03:23.938127 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-11 02:03:23.938136 | orchestrator | 2026-04-11 02:03:23.938150 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-11 02:03:23.938156 | orchestrator | Saturday 11 April 2026 02:03:11 +0000 (0:00:00.335) 0:00:00.501 ******** 2026-04-11 02:03:23.938162 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:03:23.938168 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:03:23.938173 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:03:23.938179 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:23.938184 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:03:23.938190 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:03:23.938195 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:03:23.938201 | orchestrator | 2026-04-11 02:03:23.938207 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-04-11 02:03:23.938212 | orchestrator | 2026-04-11 02:03:23.938218 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-11 02:03:23.938224 | orchestrator | Saturday 11 April 2026 02:03:15 +0000 (0:00:03.627) 0:00:04.128 ******** 2026-04-11 02:03:23.938230 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-11 02:03:23.938236 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-11 02:03:23.938241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-04-11 02:03:23.938247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 02:03:23.938253 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-11 02:03:23.938258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 02:03:23.938263 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-04-11 02:03:23.938269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 02:03:23.938275 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-11 02:03:23.938295 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 02:03:23.938301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-11 02:03:23.938307 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-04-11 02:03:23.938312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-11 02:03:23.938318 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-11 02:03:23.938323 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-04-11 02:03:23.938329 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-11 02:03:23.938335 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-11 02:03:23.938341 | orchestrator | skipping: 
[testbed-node-5] => (item=testbed-node-4)  2026-04-11 02:03:23.938348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-11 02:03:23.938355 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:03:23.938361 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-11 02:03:23.938367 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-11 02:03:23.938374 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-11 02:03:23.938380 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-11 02:03:23.938387 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-11 02:03:23.938393 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-11 02:03:23.938400 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-11 02:03:23.938406 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-04-11 02:03:23.938413 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-11 02:03:23.938419 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:03:23.938426 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-11 02:03:23.938432 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-11 02:03:23.938439 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-11 02:03:23.938445 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-11 02:03:23.938451 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:03:23.938457 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-04-11 02:03:23.938464 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-11 02:03:23.938470 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-11 02:03:23.938476 | orchestrator | skipping: [testbed-node-2] => 
(item=testbed-node-3)  2026-04-11 02:03:23.938483 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-11 02:03:23.938489 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-11 02:03:23.938496 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-11 02:03:23.938502 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:03:23.938509 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-11 02:03:23.938515 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-11 02:03:23.938521 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:03:23.938528 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-11 02:03:23.938548 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-11 02:03:23.938555 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-11 02:03:23.938561 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-11 02:03:23.938568 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-11 02:03:23.938574 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-11 02:03:23.938580 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-11 02:03:23.938587 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:03:23.938598 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-11 02:03:23.938613 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:03:23.938619 | orchestrator | 2026-04-11 02:03:23.938626 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-04-11 02:03:23.938632 | orchestrator | 2026-04-11 02:03:23.938639 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-04-11 02:03:23.938646 | orchestrator | Saturday 11 April 2026 02:03:16 +0000 
(0:00:00.511) 0:00:04.640 ******** 2026-04-11 02:03:23.938652 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:03:23.938658 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:03:23.938664 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:03:23.938671 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:03:23.938677 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:03:23.938683 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:03:23.938690 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:23.938696 | orchestrator | 2026-04-11 02:03:23.938703 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-04-11 02:03:23.938709 | orchestrator | Saturday 11 April 2026 02:03:17 +0000 (0:00:01.341) 0:00:05.981 ******** 2026-04-11 02:03:23.938716 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:23.938721 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:03:23.938726 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:03:23.938732 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:03:23.938737 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:03:23.938743 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:03:23.938748 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:03:23.938753 | orchestrator | 2026-04-11 02:03:23.938759 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-04-11 02:03:23.938765 | orchestrator | Saturday 11 April 2026 02:03:18 +0000 (0:00:01.318) 0:00:07.299 ******** 2026-04-11 02:03:23.938771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:03:23.938778 | orchestrator | 2026-04-11 02:03:23.938784 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-04-11 02:03:23.938790 | 
orchestrator | Saturday 11 April 2026 02:03:18 +0000 (0:00:00.302) 0:00:07.601 ******** 2026-04-11 02:03:23.938795 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:03:23.938801 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:03:23.938806 | orchestrator | changed: [testbed-manager] 2026-04-11 02:03:23.938812 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:03:23.938817 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:03:23.938823 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:03:23.938828 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:03:23.938833 | orchestrator | 2026-04-11 02:03:23.938839 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-04-11 02:03:23.938844 | orchestrator | Saturday 11 April 2026 02:03:21 +0000 (0:00:02.343) 0:00:09.945 ******** 2026-04-11 02:03:23.938850 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:03:23.938856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:03:23.938864 | orchestrator | 2026-04-11 02:03:23.938869 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-04-11 02:03:23.938875 | orchestrator | Saturday 11 April 2026 02:03:21 +0000 (0:00:00.332) 0:00:10.277 ******** 2026-04-11 02:03:23.938880 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:03:23.938886 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:03:23.938891 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:03:23.938897 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:03:23.938902 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:03:23.938908 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:03:23.938917 | orchestrator | 2026-04-11 02:03:23.938923 | orchestrator | TASK 
[osism.commons.proxy : Set system wide settings in environment file] ****** 2026-04-11 02:03:23.938929 | orchestrator | Saturday 11 April 2026 02:03:22 +0000 (0:00:01.034) 0:00:11.312 ******** 2026-04-11 02:03:23.938934 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:03:23.938940 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:03:23.938945 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:03:23.938950 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:03:23.938956 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:03:23.938961 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:03:23.938967 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:03:23.938972 | orchestrator | 2026-04-11 02:03:23.938978 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-04-11 02:03:23.938983 | orchestrator | Saturday 11 April 2026 02:03:23 +0000 (0:00:00.594) 0:00:11.907 ******** 2026-04-11 02:03:23.938989 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:03:23.938994 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:03:23.938999 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:03:23.939008 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:03:23.939013 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:03:23.939019 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:03:23.939024 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:23.939030 | orchestrator | 2026-04-11 02:03:23.939035 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-11 02:03:23.939042 | orchestrator | Saturday 11 April 2026 02:03:23 +0000 (0:00:00.456) 0:00:12.363 ******** 2026-04-11 02:03:23.939051 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:03:23.939060 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:03:23.939076 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:03:36.642737 | 
orchestrator | skipping: [testbed-node-5] 2026-04-11 02:03:36.642827 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:03:36.642838 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:03:36.642845 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:03:36.642852 | orchestrator | 2026-04-11 02:03:36.642859 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-11 02:03:36.642868 | orchestrator | Saturday 11 April 2026 02:03:24 +0000 (0:00:00.281) 0:00:12.644 ******** 2026-04-11 02:03:36.642876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:03:36.642898 | orchestrator | 2026-04-11 02:03:36.642908 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-11 02:03:36.642924 | orchestrator | Saturday 11 April 2026 02:03:24 +0000 (0:00:00.359) 0:00:13.004 ******** 2026-04-11 02:03:36.642938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:03:36.642948 | orchestrator | 2026-04-11 02:03:36.642959 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-04-11 02:03:36.642969 | orchestrator | Saturday 11 April 2026 02:03:24 +0000 (0:00:00.374) 0:00:13.379 ******** 2026-04-11 02:03:36.642980 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:03:36.642992 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:03:36.643002 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:03:36.643011 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:03:36.643018 | orchestrator | ok: 
[testbed-node-2] 2026-04-11 02:03:36.643025 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:03:36.643031 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:36.643038 | orchestrator | 2026-04-11 02:03:36.643044 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-11 02:03:36.643051 | orchestrator | Saturday 11 April 2026 02:03:26 +0000 (0:00:01.403) 0:00:14.783 ******** 2026-04-11 02:03:36.643078 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:03:36.643085 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:03:36.643091 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:03:36.643154 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:03:36.643162 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:03:36.643168 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:03:36.643174 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:03:36.643180 | orchestrator | 2026-04-11 02:03:36.643187 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-11 02:03:36.643193 | orchestrator | Saturday 11 April 2026 02:03:26 +0000 (0:00:00.361) 0:00:15.144 ******** 2026-04-11 02:03:36.643200 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:03:36.643206 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:36.643212 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:03:36.643218 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:03:36.643225 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:03:36.643231 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:03:36.643237 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:03:36.643243 | orchestrator | 2026-04-11 02:03:36.643249 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-11 02:03:36.643255 | orchestrator | Saturday 11 April 2026 02:03:27 +0000 (0:00:00.550) 0:00:15.695 ******** 2026-04-11 02:03:36.643262 | 
orchestrator | skipping: [testbed-manager] 2026-04-11 02:03:36.643268 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:03:36.643274 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:03:36.643280 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:03:36.643288 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:03:36.643295 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:03:36.643303 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:03:36.643310 | orchestrator | 2026-04-11 02:03:36.643318 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-11 02:03:36.643326 | orchestrator | Saturday 11 April 2026 02:03:27 +0000 (0:00:00.284) 0:00:15.979 ******** 2026-04-11 02:03:36.643332 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:36.643340 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:03:36.643347 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:03:36.643354 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:03:36.643362 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:03:36.643370 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:03:36.643380 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:03:36.643390 | orchestrator | 2026-04-11 02:03:36.643400 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-11 02:03:36.643409 | orchestrator | Saturday 11 April 2026 02:03:27 +0000 (0:00:00.564) 0:00:16.544 ******** 2026-04-11 02:03:36.643418 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:36.643428 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:03:36.643438 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:03:36.643449 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:03:36.643461 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:03:36.643472 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:03:36.643483 | 
orchestrator | changed: [testbed-node-2] 2026-04-11 02:03:36.643494 | orchestrator | 2026-04-11 02:03:36.643503 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-11 02:03:36.643510 | orchestrator | Saturday 11 April 2026 02:03:29 +0000 (0:00:01.158) 0:00:17.702 ******** 2026-04-11 02:03:36.643518 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:03:36.643533 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:03:36.643540 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:36.643547 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:03:36.643554 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:03:36.643561 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:03:36.643568 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:03:36.643576 | orchestrator | 2026-04-11 02:03:36.643583 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-11 02:03:36.643597 | orchestrator | Saturday 11 April 2026 02:03:30 +0000 (0:00:01.073) 0:00:18.776 ******** 2026-04-11 02:03:36.643623 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:03:36.643635 | orchestrator | 2026-04-11 02:03:36.643645 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-11 02:03:36.643657 | orchestrator | Saturday 11 April 2026 02:03:30 +0000 (0:00:00.402) 0:00:19.179 ******** 2026-04-11 02:03:36.643667 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:03:36.643678 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:03:36.643686 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:03:36.643692 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:03:36.643698 | orchestrator | changed: [testbed-node-5] 
2026-04-11 02:03:36.643704 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:03:36.643710 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:03:36.643717 | orchestrator | 2026-04-11 02:03:36.643723 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-11 02:03:36.643729 | orchestrator | Saturday 11 April 2026 02:03:31 +0000 (0:00:01.288) 0:00:20.467 ******** 2026-04-11 02:03:36.643735 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:36.643741 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:03:36.643747 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:03:36.643753 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:03:36.643760 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:03:36.643766 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:03:36.643772 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:03:36.643778 | orchestrator | 2026-04-11 02:03:36.643784 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-11 02:03:36.643791 | orchestrator | Saturday 11 April 2026 02:03:32 +0000 (0:00:00.246) 0:00:20.713 ******** 2026-04-11 02:03:36.643797 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:36.643803 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:03:36.643809 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:03:36.643815 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:03:36.643821 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:03:36.643827 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:03:36.643833 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:03:36.643839 | orchestrator | 2026-04-11 02:03:36.643845 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-11 02:03:36.643851 | orchestrator | Saturday 11 April 2026 02:03:32 +0000 (0:00:00.278) 0:00:20.992 ******** 2026-04-11 02:03:36.643857 | orchestrator | ok: [testbed-manager] 2026-04-11 
02:03:36.643863 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:03:36.643873 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:03:36.643883 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:03:36.643893 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:03:36.643903 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:03:36.643912 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:03:36.643922 | orchestrator | 2026-04-11 02:03:36.643932 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-11 02:03:36.643943 | orchestrator | Saturday 11 April 2026 02:03:32 +0000 (0:00:00.262) 0:00:21.254 ******** 2026-04-11 02:03:36.643954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:03:36.643966 | orchestrator | 2026-04-11 02:03:36.643973 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-11 02:03:36.643979 | orchestrator | Saturday 11 April 2026 02:03:33 +0000 (0:00:00.359) 0:00:21.613 ******** 2026-04-11 02:03:36.643985 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:36.643991 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:03:36.644007 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:03:36.644021 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:03:36.644035 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:03:36.644044 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:03:36.644053 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:03:36.644063 | orchestrator | 2026-04-11 02:03:36.644072 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-11 02:03:36.644082 | orchestrator | Saturday 11 April 2026 02:03:33 +0000 (0:00:00.512) 0:00:22.126 ******** 2026-04-11 02:03:36.644090 | 
orchestrator | skipping: [testbed-manager] 2026-04-11 02:03:36.644119 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:03:36.644128 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:03:36.644139 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:03:36.644149 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:03:36.644158 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:03:36.644168 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:03:36.644179 | orchestrator | 2026-04-11 02:03:36.644188 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-11 02:03:36.644198 | orchestrator | Saturday 11 April 2026 02:03:33 +0000 (0:00:00.274) 0:00:22.400 ******** 2026-04-11 02:03:36.644209 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:36.644219 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:03:36.644229 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:03:36.644239 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:03:36.644246 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:03:36.644252 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:03:36.644258 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:03:36.644264 | orchestrator | 2026-04-11 02:03:36.644270 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-11 02:03:36.644276 | orchestrator | Saturday 11 April 2026 02:03:34 +0000 (0:00:01.103) 0:00:23.503 ******** 2026-04-11 02:03:36.644282 | orchestrator | ok: [testbed-manager] 2026-04-11 02:03:36.644288 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:03:36.644295 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:03:36.644302 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:03:36.644308 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:03:36.644314 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:03:36.644320 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:03:36.644326 | 
orchestrator |
2026-04-11 02:03:36.644332 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-11 02:03:36.644339 | orchestrator | Saturday 11 April 2026 02:03:35 +0000 (0:00:00.576) 0:00:24.080 ********
2026-04-11 02:03:36.644345 | orchestrator | ok: [testbed-manager]
2026-04-11 02:03:36.644352 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:03:36.644358 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:03:36.644373 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:03:36.644388 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:04:18.609096 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:04:18.609251 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:04:18.609275 | orchestrator |
2026-04-11 02:04:18.609290 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-11 02:04:18.609301 | orchestrator | Saturday 11 April 2026 02:03:36 +0000 (0:00:01.147) 0:00:25.228 ********
2026-04-11 02:04:18.609309 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:04:18.609318 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:04:18.609327 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:04:18.609335 | orchestrator | changed: [testbed-manager]
2026-04-11 02:04:18.609344 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:04:18.609355 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:04:18.609368 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:04:18.609381 | orchestrator |
2026-04-11 02:04:18.609394 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-11 02:04:18.609409 | orchestrator | Saturday 11 April 2026 02:03:51 +0000 (0:00:15.081) 0:00:40.310 ********
2026-04-11 02:04:18.609423 | orchestrator | ok: [testbed-manager]
2026-04-11 02:04:18.609461 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:04:18.609470 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:04:18.609478 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:04:18.609486 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:04:18.609494 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:04:18.609501 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:04:18.609509 | orchestrator |
2026-04-11 02:04:18.609517 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-11 02:04:18.609525 | orchestrator | Saturday 11 April 2026 02:03:52 +0000 (0:00:00.323) 0:00:40.634 ********
2026-04-11 02:04:18.609533 | orchestrator | ok: [testbed-manager]
2026-04-11 02:04:18.609543 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:04:18.609556 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:04:18.609568 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:04:18.609580 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:04:18.609594 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:04:18.609606 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:04:18.609618 | orchestrator |
2026-04-11 02:04:18.609630 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-11 02:04:18.609644 | orchestrator | Saturday 11 April 2026 02:03:52 +0000 (0:00:00.265) 0:00:40.899 ********
2026-04-11 02:04:18.609657 | orchestrator | ok: [testbed-manager]
2026-04-11 02:04:18.609671 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:04:18.609684 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:04:18.609696 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:04:18.609709 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:04:18.609719 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:04:18.609732 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:04:18.609745 | orchestrator |
2026-04-11 02:04:18.609759 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-11 02:04:18.609773 | orchestrator | Saturday 11 April 2026 02:03:52 +0000 (0:00:00.261) 0:00:41.161 ********
2026-04-11 02:04:18.609790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:04:18.609805 | orchestrator |
2026-04-11 02:04:18.609819 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-11 02:04:18.609834 | orchestrator | Saturday 11 April 2026 02:03:52 +0000 (0:00:00.377) 0:00:41.539 ********
2026-04-11 02:04:18.609846 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:04:18.609859 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:04:18.609873 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:04:18.609881 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:04:18.609889 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:04:18.609897 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:04:18.609904 | orchestrator | ok: [testbed-manager]
2026-04-11 02:04:18.609912 | orchestrator |
2026-04-11 02:04:18.609925 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-11 02:04:18.609938 | orchestrator | Saturday 11 April 2026 02:03:54 +0000 (0:00:01.791) 0:00:43.331 ********
2026-04-11 02:04:18.609950 | orchestrator | changed: [testbed-manager]
2026-04-11 02:04:18.609965 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:04:18.609978 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:04:18.609991 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:04:18.610003 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:04:18.610078 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:04:18.610095 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:04:18.610110 | orchestrator |
2026-04-11 02:04:18.610124 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-11 02:04:18.610195 | orchestrator | Saturday 11 April 2026 02:03:55 +0000 (0:00:01.081) 0:00:44.412 ********
2026-04-11 02:04:18.610208 | orchestrator | ok: [testbed-manager]
2026-04-11 02:04:18.610216 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:04:18.610224 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:04:18.610245 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:04:18.610253 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:04:18.610261 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:04:18.610269 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:04:18.610277 | orchestrator |
2026-04-11 02:04:18.610285 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-11 02:04:18.610294 | orchestrator | Saturday 11 April 2026 02:03:56 +0000 (0:00:00.801) 0:00:45.214 ********
2026-04-11 02:04:18.610303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:04:18.610314 | orchestrator |
2026-04-11 02:04:18.610336 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-11 02:04:18.610345 | orchestrator | Saturday 11 April 2026 02:03:56 +0000 (0:00:00.335) 0:00:45.549 ********
2026-04-11 02:04:18.610354 | orchestrator | changed: [testbed-manager]
2026-04-11 02:04:18.610362 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:04:18.610370 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:04:18.610378 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:04:18.610386 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:04:18.610394 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:04:18.610402 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:04:18.610410 | orchestrator |
2026-04-11 02:04:18.610439 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-11 02:04:18.610448 | orchestrator | Saturday 11 April 2026 02:03:58 +0000 (0:00:01.144) 0:00:46.694 ********
2026-04-11 02:04:18.610456 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:04:18.610465 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:04:18.610473 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:04:18.610481 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:04:18.610489 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:04:18.610497 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:04:18.610505 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:04:18.610513 | orchestrator |
2026-04-11 02:04:18.610521 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-11 02:04:18.610529 | orchestrator | Saturday 11 April 2026 02:03:58 +0000 (0:00:00.264) 0:00:46.958 ********
2026-04-11 02:04:18.610537 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:04:18.610545 | orchestrator |
2026-04-11 02:04:18.610554 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-11 02:04:18.610562 | orchestrator | Saturday 11 April 2026 02:03:58 +0000 (0:00:00.381) 0:00:47.340 ********
2026-04-11 02:04:18.610570 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:04:18.610577 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:04:18.610585 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:04:18.610593 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:04:18.610601 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:04:18.610609 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:04:18.610617 | orchestrator | ok: [testbed-manager]
2026-04-11 02:04:18.610625 | orchestrator |
2026-04-11 02:04:18.610633 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-11 02:04:18.610641 | orchestrator | Saturday 11 April 2026 02:04:00 +0000 (0:00:01.541) 0:00:48.882 ********
2026-04-11 02:04:18.610649 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:04:18.610657 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:04:18.610665 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:04:18.610673 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:04:18.610681 | orchestrator | changed: [testbed-manager]
2026-04-11 02:04:18.610689 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:04:18.610697 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:04:18.610711 | orchestrator |
2026-04-11 02:04:18.610719 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-11 02:04:18.610727 | orchestrator | Saturday 11 April 2026 02:04:01 +0000 (0:00:01.104) 0:00:49.986 ********
2026-04-11 02:04:18.610735 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:04:18.610743 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:04:18.610751 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:04:18.610759 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:04:18.610767 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:04:18.610775 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:04:18.610783 | orchestrator | changed: [testbed-manager]
2026-04-11 02:04:18.610791 | orchestrator |
2026-04-11 02:04:18.610799 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-11 02:04:18.610807 | orchestrator | Saturday 11 April 2026 02:04:15 +0000 (0:00:13.973) 0:01:03.960 ********
2026-04-11 02:04:18.610815 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:04:18.610823 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:04:18.610831 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:04:18.610843 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:04:18.610856 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:04:18.610869 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:04:18.610883 | orchestrator | ok: [testbed-manager]
2026-04-11 02:04:18.610896 | orchestrator |
2026-04-11 02:04:18.610909 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-11 02:04:18.610943 | orchestrator | Saturday 11 April 2026 02:04:16 +0000 (0:00:01.321) 0:01:05.282 ********
2026-04-11 02:04:18.610952 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:04:18.610960 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:04:18.610968 | orchestrator | ok: [testbed-manager]
2026-04-11 02:04:18.610975 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:04:18.610983 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:04:18.610991 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:04:18.610999 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:04:18.611007 | orchestrator |
2026-04-11 02:04:18.611015 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-11 02:04:18.611026 | orchestrator | Saturday 11 April 2026 02:04:17 +0000 (0:00:01.020) 0:01:06.303 ********
2026-04-11 02:04:18.611039 | orchestrator | ok: [testbed-manager]
2026-04-11 02:04:18.611060 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:04:18.611073 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:04:18.611085 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:04:18.611098 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:04:18.611110 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:04:18.611124 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:04:18.611136 | orchestrator |
2026-04-11 02:04:18.611167 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-04-11 02:04:18.611178 | orchestrator | Saturday 11 April 2026 02:04:17 +0000 (0:00:00.278) 0:01:06.582 ********
2026-04-11 02:04:18.611191 | orchestrator | ok: [testbed-manager]
2026-04-11 02:04:18.611204 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:04:18.611216 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:04:18.611229 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:04:18.611241 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:04:18.611254 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:04:18.611268 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:04:18.611283 | orchestrator |
2026-04-11 02:04:18.611305 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-04-11 02:04:18.611314 | orchestrator | Saturday 11 April 2026 02:04:18 +0000 (0:00:00.260) 0:01:06.842 ********
2026-04-11 02:04:18.611323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:04:18.611332 | orchestrator |
2026-04-11 02:04:18.611350 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-04-11 02:06:41.716573 | orchestrator | Saturday 11 April 2026 02:04:18 +0000 (0:00:00.354) 0:01:07.196 ********
2026-04-11 02:06:41.716693 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:06:41.716710 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:06:41.716723 | orchestrator | ok: [testbed-manager]
2026-04-11 02:06:41.716734 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:06:41.716745 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:06:41.716756 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:06:41.716767 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:06:41.716778 | orchestrator |
2026-04-11 02:06:41.716791 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-04-11 02:06:41.716803 | orchestrator | Saturday 11 April 2026 02:04:20 +0000 (0:00:01.563) 0:01:08.759 ********
2026-04-11 02:06:41.716814 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:06:41.716826 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:06:41.716837 | orchestrator | changed: [testbed-manager]
2026-04-11 02:06:41.716848 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:06:41.716859 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:06:41.716870 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:06:41.716881 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:06:41.716892 | orchestrator |
2026-04-11 02:06:41.716903 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-04-11 02:06:41.716915 | orchestrator | Saturday 11 April 2026 02:04:20 +0000 (0:00:00.617) 0:01:09.376 ********
2026-04-11 02:06:41.716926 | orchestrator | ok: [testbed-manager]
2026-04-11 02:06:41.716937 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:06:41.716948 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:06:41.716959 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:06:41.716970 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:06:41.716981 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:06:41.716992 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:06:41.717003 | orchestrator |
2026-04-11 02:06:41.717015 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-04-11 02:06:41.717026 | orchestrator | Saturday 11 April 2026 02:04:21 +0000 (0:00:00.302) 0:01:09.679 ********
2026-04-11 02:06:41.717037 | orchestrator | ok: [testbed-manager]
2026-04-11 02:06:41.717048 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:06:41.717059 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:06:41.717070 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:06:41.717081 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:06:41.717092 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:06:41.717104 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:06:41.717117 | orchestrator |
2026-04-11 02:06:41.717130 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-04-11 02:06:41.717143 | orchestrator | Saturday 11 April 2026 02:04:22 +0000 (0:00:01.142) 0:01:10.822 ********
2026-04-11 02:06:41.717155 | orchestrator | changed: [testbed-manager]
2026-04-11 02:06:41.717167 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:06:41.717179 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:06:41.717192 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:06:41.717204 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:06:41.717217 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:06:41.717230 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:06:41.717241 | orchestrator |
2026-04-11 02:06:41.717259 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-04-11 02:06:41.717272 | orchestrator | Saturday 11 April 2026 02:04:23 +0000 (0:00:01.706) 0:01:12.529 ********
2026-04-11 02:06:41.717285 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:06:41.717324 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:06:41.717337 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:06:41.717350 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:06:41.717363 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:06:41.717376 | orchestrator | ok: [testbed-manager]
2026-04-11 02:06:41.717389 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:06:41.717401 | orchestrator |
2026-04-11 02:06:41.717414 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-04-11 02:06:41.717453 | orchestrator | Saturday 11 April 2026 02:04:26 +0000 (0:00:02.502) 0:01:15.032 ********
2026-04-11 02:06:41.717465 | orchestrator | ok: [testbed-manager]
2026-04-11 02:06:41.717476 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:06:41.717487 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:06:41.717498 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:06:41.717509 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:06:41.717520 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:06:41.717531 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:06:41.717542 | orchestrator |
2026-04-11 02:06:41.717553 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-04-11 02:06:41.717564 | orchestrator | Saturday 11 April 2026 02:05:04 +0000 (0:00:37.675) 0:01:52.707 ********
2026-04-11 02:06:41.717575 | orchestrator | changed: [testbed-manager]
2026-04-11 02:06:41.717586 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:06:41.717597 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:06:41.717608 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:06:41.717619 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:06:41.717631 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:06:41.717641 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:06:41.717652 | orchestrator |
2026-04-11 02:06:41.717663 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-04-11 02:06:41.717675 | orchestrator | Saturday 11 April 2026 02:06:24 +0000 (0:01:19.926) 0:03:12.633 ********
2026-04-11 02:06:41.717686 | orchestrator | ok: [testbed-manager]
2026-04-11 02:06:41.717697 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:06:41.717708 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:06:41.717719 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:06:41.717730 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:06:41.717741 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:06:41.717752 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:06:41.717762 | orchestrator |
2026-04-11 02:06:41.717774 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-04-11 02:06:41.717785 | orchestrator | Saturday 11 April 2026 02:06:25 +0000 (0:00:01.791) 0:03:14.424 ********
2026-04-11 02:06:41.717796 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:06:41.717807 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:06:41.717818 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:06:41.717829 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:06:41.717840 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:06:41.717851 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:06:41.717862 | orchestrator | changed: [testbed-manager]
2026-04-11 02:06:41.717873 | orchestrator |
2026-04-11 02:06:41.717884 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-04-11 02:06:41.717895 | orchestrator | Saturday 11 April 2026 02:06:40 +0000 (0:00:14.564) 0:03:28.989 ********
2026-04-11 02:06:41.717942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-04-11 02:06:41.717976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-04-11 02:06:41.718002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-04-11 02:06:41.718066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-11 02:06:41.718080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-11 02:06:41.718092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-04-11 02:06:41.718103 | orchestrator |
2026-04-11 02:06:41.718115 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-04-11 02:06:41.718126 | orchestrator | Saturday 11 April 2026 02:06:40 +0000 (0:00:00.446) 0:03:29.435 ********
2026-04-11 02:06:41.718137 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-11 02:06:41.718148 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-11 02:06:41.718158 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:06:41.718169 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-11 02:06:41.718180 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:06:41.718191 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:06:41.718202 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-11 02:06:41.718213 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:06:41.718224 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-11 02:06:41.718235 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-11 02:06:41.718246 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-11 02:06:41.718257 | orchestrator |
2026-04-11 02:06:41.718268 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-04-11 02:06:41.718279 | orchestrator | Saturday 11 April 2026 02:06:41 +0000 (0:00:00.793) 0:03:30.229 ********
2026-04-11 02:06:41.718290 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-11 02:06:41.718360 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-11 02:06:41.718372 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-11 02:06:41.718383 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-11 02:06:41.718395 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-11 02:06:41.718415 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-11 02:06:47.529088 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-11 02:06:47.529225 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-11 02:06:47.529287 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-11 02:06:47.529349 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-11 02:06:47.529371 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-11 02:06:47.529388 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-11 02:06:47.529408 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:06:47.529428 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-11 02:06:47.529445 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-11 02:06:47.529465 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-11 02:06:47.529483 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-11 02:06:47.529501 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-11 02:06:47.529519 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-11 02:06:47.529538 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-11 02:06:47.529555 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-11 02:06:47.529573 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-11 02:06:47.529592 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-11 02:06:47.529613 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-11 02:06:47.529632 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-11 02:06:47.529652 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-11 02:06:47.529670 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-11 02:06:47.529691 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-11 02:06:47.529710 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-11 02:06:47.529732 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-11 02:06:47.529751 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-11 02:06:47.529770 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:06:47.529789 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:06:47.529808 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-11 02:06:47.529828 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-11 02:06:47.529847 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-11 02:06:47.529867 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-11 02:06:47.529886 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-11 02:06:47.529904 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-11 02:06:47.529922 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-11 02:06:47.529940 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-11 02:06:47.529960 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-11 02:06:47.529997 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-11 02:06:47.530103 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:06:47.530141 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-11 02:06:47.530154 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-11 02:06:47.530165 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-11 02:06:47.530176 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-11 02:06:47.530186 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-11 02:06:47.530220 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-11 02:06:47.530232 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-11 02:06:47.530244 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-11 02:06:47.530254 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-11 02:06:47.530265 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-11 02:06:47.530276 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-11 02:06:47.530287 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-11 02:06:47.530363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-11 02:06:47.530378 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-11 02:06:47.530389 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-11 02:06:47.530400 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-11 02:06:47.530411 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-11 02:06:47.530422 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-11 02:06:47.530433 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-11 02:06:47.530444 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-11 02:06:47.530455 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-11 02:06:47.530466 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-11 02:06:47.530477 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-11 02:06:47.530488 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-11 02:06:47.530499 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-11 02:06:47.530509 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-11 02:06:47.530520 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-11 02:06:47.530531 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-11 02:06:47.530543 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-11 02:06:47.530554 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-11 02:06:47.530576 | orchestrator |
2026-04-11 02:06:47.530588 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-11 02:06:47.530599 | orchestrator | Saturday 11 April 2026 02:06:46 +0000 (0:00:04.725) 0:03:34.954 ********
2026-04-11 02:06:47.530610 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 02:06:47.530621 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 02:06:47.530632 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 02:06:47.530643 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 02:06:47.530654 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 02:06:47.530664 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 02:06:47.530675 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-11 02:06:47.530686 | orchestrator |
2026-04-11 02:06:47.530697 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-11 02:06:47.530707 | orchestrator | Saturday 11 April 2026 02:06:46 +0000 (0:00:00.628) 0:03:35.582 ********
2026-04-11 02:06:47.530718 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:06:47.530729 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:06:47.530740 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:06:47.530757 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:06:47.530768 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:06:47.530779 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:06:47.530790 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:06:47.530801 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:06:47.530811 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:06:47.530822 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:06:47.530848 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:07:01.234452 | orchestrator |
2026-04-11 02:07:01.234569 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-11 02:07:01.234586 | orchestrator | Saturday 11 April 2026 02:06:47 +0000 (0:00:00.531) 0:03:36.114 ********
2026-04-11 02:07:01.234599 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:07:01.234611 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:07:01.234624 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:07:01.234635 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:07:01.234646 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:07:01.234658 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:07:01.234669 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:07:01.234680 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:07:01.234691 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:07:01.234702 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:07:01.234713 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-11 02:07:01.234725 | orchestrator |
2026-04-11 02:07:01.234736 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-11 02:07:01.234774 | orchestrator | Saturday 11 April 2026 02:06:48 +0000 (0:00:00.653) 0:03:36.768 ********
2026-04-11 02:07:01.234785 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-11 02:07:01.234797 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:07:01.234808 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-11 02:07:01.234819 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-11 02:07:01.234829 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:07:01.234840
| orchestrator | skipping: [testbed-node-1] 2026-04-11 02:07:01.234851 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-11 02:07:01.234862 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:07:01.234874 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-11 02:07:01.234885 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-11 02:07:01.234896 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-11 02:07:01.234907 | orchestrator | 2026-04-11 02:07:01.234919 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-04-11 02:07:01.234930 | orchestrator | Saturday 11 April 2026 02:06:48 +0000 (0:00:00.609) 0:03:37.377 ******** 2026-04-11 02:07:01.234944 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:07:01.234957 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:07:01.234970 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:07:01.234984 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:07:01.234997 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:07:01.235028 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:07:01.235040 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:07:01.235062 | orchestrator | 2026-04-11 02:07:01.235073 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-04-11 02:07:01.235085 | orchestrator | Saturday 11 April 2026 02:06:49 +0000 (0:00:00.348) 0:03:37.725 ******** 2026-04-11 02:07:01.235096 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:07:01.235108 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:07:01.235119 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:07:01.235129 | orchestrator | ok: [testbed-node-3] 
2026-04-11 02:07:01.235140 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:07:01.235151 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:07:01.235162 | orchestrator | ok: [testbed-manager] 2026-04-11 02:07:01.235173 | orchestrator | 2026-04-11 02:07:01.235184 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-04-11 02:07:01.235195 | orchestrator | Saturday 11 April 2026 02:06:55 +0000 (0:00:05.902) 0:03:43.628 ******** 2026-04-11 02:07:01.235207 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-04-11 02:07:01.235218 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-04-11 02:07:01.235229 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:07:01.235240 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:07:01.235251 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-04-11 02:07:01.235262 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-04-11 02:07:01.235273 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:07:01.235285 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-04-11 02:07:01.235296 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:07:01.235308 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-04-11 02:07:01.235399 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:07:01.235412 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:07:01.235423 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-04-11 02:07:01.235434 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:07:01.235445 | orchestrator | 2026-04-11 02:07:01.235466 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-04-11 02:07:01.235477 | orchestrator | Saturday 11 April 2026 02:06:55 +0000 (0:00:00.362) 0:03:43.990 ******** 2026-04-11 02:07:01.235488 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-04-11 02:07:01.235499 | orchestrator 
| ok: [testbed-node-3] => (item=cron) 2026-04-11 02:07:01.235510 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-04-11 02:07:01.235539 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-04-11 02:07:01.235551 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-04-11 02:07:01.235562 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-04-11 02:07:01.235572 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-04-11 02:07:01.235583 | orchestrator | 2026-04-11 02:07:01.235594 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-04-11 02:07:01.235605 | orchestrator | Saturday 11 April 2026 02:06:56 +0000 (0:00:01.165) 0:03:45.156 ******** 2026-04-11 02:07:01.235619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:07:01.235632 | orchestrator | 2026-04-11 02:07:01.235643 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-04-11 02:07:01.235654 | orchestrator | Saturday 11 April 2026 02:06:57 +0000 (0:00:00.486) 0:03:45.642 ******** 2026-04-11 02:07:01.235664 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:07:01.235675 | orchestrator | ok: [testbed-manager] 2026-04-11 02:07:01.235686 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:07:01.235696 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:07:01.235707 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:07:01.235718 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:07:01.235728 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:07:01.235739 | orchestrator | 2026-04-11 02:07:01.235750 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-04-11 02:07:01.235761 | orchestrator | Saturday 11 April 2026 02:06:58 +0000 
(0:00:01.260) 0:03:46.903 ******** 2026-04-11 02:07:01.235771 | orchestrator | ok: [testbed-manager] 2026-04-11 02:07:01.235782 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:07:01.235793 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:07:01.235803 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:07:01.235814 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:07:01.235824 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:07:01.235835 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:07:01.235846 | orchestrator | 2026-04-11 02:07:01.235856 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-04-11 02:07:01.235867 | orchestrator | Saturday 11 April 2026 02:06:58 +0000 (0:00:00.639) 0:03:47.543 ******** 2026-04-11 02:07:01.235878 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:07:01.235889 | orchestrator | changed: [testbed-manager] 2026-04-11 02:07:01.235900 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:07:01.235910 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:07:01.235921 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:07:01.235932 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:07:01.235942 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:07:01.235953 | orchestrator | 2026-04-11 02:07:01.235964 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-04-11 02:07:01.235974 | orchestrator | Saturday 11 April 2026 02:06:59 +0000 (0:00:00.668) 0:03:48.212 ******** 2026-04-11 02:07:01.235985 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:07:01.235996 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:07:01.236007 | orchestrator | ok: [testbed-manager] 2026-04-11 02:07:01.236018 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:07:01.236029 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:07:01.236039 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:07:01.236050 | orchestrator | ok: 
[testbed-node-2] 2026-04-11 02:07:01.236060 | orchestrator | 2026-04-11 02:07:01.236072 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-04-11 02:07:01.236089 | orchestrator | Saturday 11 April 2026 02:07:00 +0000 (0:00:00.615) 0:03:48.827 ******** 2026-04-11 02:07:01.236105 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775871716.071818, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 02:07:01.236119 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775871702.2904031, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 02:07:01.236137 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775871680.6083899, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 02:07:01.236172 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775871704.0457132, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 02:07:06.340839 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775871710.3448522, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 02:07:06.340930 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775871702.419942, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 02:07:06.340938 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775871693.076655, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 02:07:06.340961 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 02:07:06.340967 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 02:07:06.340982 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 
'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 02:07:06.340987 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 02:07:06.341003 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 02:07:06.341008 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}) 2026-04-11 02:07:06.341013 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 02:07:06.341023 | orchestrator | 2026-04-11 02:07:06.341029 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-04-11 02:07:06.341035 | orchestrator | Saturday 11 April 2026 02:07:01 +0000 (0:00:00.997) 0:03:49.824 ******** 2026-04-11 02:07:06.341040 | orchestrator | changed: [testbed-manager] 2026-04-11 02:07:06.341046 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:07:06.341050 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:07:06.341055 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:07:06.341060 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:07:06.341065 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:07:06.341069 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:07:06.341074 | orchestrator | 2026-04-11 02:07:06.341079 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-04-11 02:07:06.341083 | orchestrator | Saturday 11 April 2026 02:07:02 +0000 (0:00:01.147) 0:03:50.972 ******** 2026-04-11 02:07:06.341088 | orchestrator | changed: [testbed-manager] 2026-04-11 02:07:06.341093 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:07:06.341097 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:07:06.341102 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:07:06.341106 | orchestrator | 
changed: [testbed-node-0] 2026-04-11 02:07:06.341111 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:07:06.341115 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:07:06.341120 | orchestrator | 2026-04-11 02:07:06.341125 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-04-11 02:07:06.341129 | orchestrator | Saturday 11 April 2026 02:07:03 +0000 (0:00:01.191) 0:03:52.164 ******** 2026-04-11 02:07:06.341134 | orchestrator | changed: [testbed-manager] 2026-04-11 02:07:06.341138 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:07:06.341143 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:07:06.341147 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:07:06.341152 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:07:06.341157 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:07:06.341163 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:07:06.341170 | orchestrator | 2026-04-11 02:07:06.341178 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-04-11 02:07:06.341184 | orchestrator | Saturday 11 April 2026 02:07:04 +0000 (0:00:01.181) 0:03:53.345 ******** 2026-04-11 02:07:06.341191 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:07:06.341197 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:07:06.341208 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:07:06.341215 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:07:06.341221 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:07:06.341231 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:07:06.341241 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:07:06.341248 | orchestrator | 2026-04-11 02:07:06.341255 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-04-11 02:07:06.341263 | orchestrator | Saturday 11 April 2026 02:07:05 +0000 (0:00:00.342) 
0:03:53.687 ******** 2026-04-11 02:07:06.341270 | orchestrator | ok: [testbed-manager] 2026-04-11 02:07:06.341279 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:07:06.341286 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:07:06.341293 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:07:06.341301 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:07:06.341308 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:07:06.341316 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:07:06.341388 | orchestrator | 2026-04-11 02:07:06.341395 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-04-11 02:07:06.341400 | orchestrator | Saturday 11 April 2026 02:07:05 +0000 (0:00:00.764) 0:03:54.452 ******** 2026-04-11 02:07:06.341409 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:07:06.341421 | orchestrator | 2026-04-11 02:07:06.341427 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-04-11 02:07:06.341438 | orchestrator | Saturday 11 April 2026 02:07:06 +0000 (0:00:00.479) 0:03:54.932 ******** 2026-04-11 02:08:24.226241 | orchestrator | ok: [testbed-manager] 2026-04-11 02:08:24.226365 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:08:24.226383 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:08:24.226455 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:08:24.226479 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:08:24.226497 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:08:24.226515 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:08:24.226534 | orchestrator | 2026-04-11 02:08:24.226553 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-04-11 
02:08:24.226574 | orchestrator | Saturday 11 April 2026 02:07:14 +0000 (0:00:08.180) 0:04:03.112 ******** 2026-04-11 02:08:24.226593 | orchestrator | ok: [testbed-manager] 2026-04-11 02:08:24.226650 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:08:24.226668 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:08:24.226687 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:08:24.226705 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:08:24.226723 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:08:24.226742 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:08:24.226761 | orchestrator | 2026-04-11 02:08:24.226781 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-04-11 02:08:24.226803 | orchestrator | Saturday 11 April 2026 02:07:15 +0000 (0:00:01.204) 0:04:04.317 ******** 2026-04-11 02:08:24.226824 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:08:24.226844 | orchestrator | ok: [testbed-manager] 2026-04-11 02:08:24.226880 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:08:24.226902 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:08:24.226914 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:08:24.226925 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:08:24.226936 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:08:24.226947 | orchestrator | 2026-04-11 02:08:24.226959 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-04-11 02:08:24.226970 | orchestrator | Saturday 11 April 2026 02:07:16 +0000 (0:00:01.180) 0:04:05.497 ******** 2026-04-11 02:08:24.226981 | orchestrator | ok: [testbed-manager] 2026-04-11 02:08:24.226992 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:08:24.227003 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:08:24.227014 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:08:24.227026 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:08:24.227037 | orchestrator | ok: [testbed-node-1] 2026-04-11 
02:08:24.227047 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:08:24.227058 | orchestrator | 2026-04-11 02:08:24.227069 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-04-11 02:08:24.227082 | orchestrator | Saturday 11 April 2026 02:07:17 +0000 (0:00:00.330) 0:04:05.827 ******** 2026-04-11 02:08:24.227093 | orchestrator | ok: [testbed-manager] 2026-04-11 02:08:24.227103 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:08:24.227114 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:08:24.227125 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:08:24.227136 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:08:24.227146 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:08:24.227157 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:08:24.227168 | orchestrator | 2026-04-11 02:08:24.227179 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-04-11 02:08:24.227190 | orchestrator | Saturday 11 April 2026 02:07:17 +0000 (0:00:00.402) 0:04:06.230 ******** 2026-04-11 02:08:24.227201 | orchestrator | ok: [testbed-manager] 2026-04-11 02:08:24.227212 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:08:24.227223 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:08:24.227261 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:08:24.227273 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:08:24.227284 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:08:24.227295 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:08:24.227306 | orchestrator | 2026-04-11 02:08:24.227317 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-04-11 02:08:24.227328 | orchestrator | Saturday 11 April 2026 02:07:17 +0000 (0:00:00.348) 0:04:06.578 ******** 2026-04-11 02:08:24.227339 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:08:24.227349 | orchestrator | ok: [testbed-node-0] 2026-04-11 
02:08:24.227360 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:08:24.227371 | orchestrator | ok: [testbed-manager] 2026-04-11 02:08:24.227382 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:08:24.227393 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:08:24.227498 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:08:24.227510 | orchestrator | 2026-04-11 02:08:24.227521 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-04-11 02:08:24.227532 | orchestrator | Saturday 11 April 2026 02:07:23 +0000 (0:00:05.531) 0:04:12.110 ******** 2026-04-11 02:08:24.227547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:08:24.227560 | orchestrator | 2026-04-11 02:08:24.227572 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-04-11 02:08:24.227583 | orchestrator | Saturday 11 April 2026 02:07:23 +0000 (0:00:00.463) 0:04:12.574 ******** 2026-04-11 02:08:24.227594 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-04-11 02:08:24.227605 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-04-11 02:08:24.227616 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-04-11 02:08:24.227627 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-04-11 02:08:24.227639 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:08:24.227669 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-04-11 02:08:24.227680 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-04-11 02:08:24.227691 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:08:24.227702 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-04-11 
02:08:24.227713 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily) 
2026-04-11 02:08:24.227724 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:08:24.227735 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade) 
2026-04-11 02:08:24.227746 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily) 
2026-04-11 02:08:24.227757 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:08:24.227769 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade) 
2026-04-11 02:08:24.227780 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:08:24.227814 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily) 
2026-04-11 02:08:24.227826 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:08:24.227837 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade) 
2026-04-11 02:08:24.227848 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily) 
2026-04-11 02:08:24.227859 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:08:24.227870 | orchestrator |
2026-04-11 02:08:24.227881 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-11 02:08:24.227892 | orchestrator | Saturday 11 April 2026 02:07:24 +0000 (0:00:00.413) 0:04:12.988 ********
2026-04-11 02:08:24.227904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:08:24.227915 | orchestrator |
2026-04-11 02:08:24.227926 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-11 02:08:24.227948 | orchestrator | Saturday 11 April 2026 02:07:24 +0000 (0:00:00.425) 0:04:13.413 ********
2026-04-11 02:08:24.227959 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service) 
2026-04-11 02:08:24.227970 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:08:24.227981 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service) 
2026-04-11 02:08:24.227992 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service) 
2026-04-11 02:08:24.228017 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:08:24.228039 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service) 
2026-04-11 02:08:24.228050 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:08:24.228061 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service) 
2026-04-11 02:08:24.228072 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:08:24.228083 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service) 
2026-04-11 02:08:24.228094 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:08:24.228105 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:08:24.228116 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service) 
2026-04-11 02:08:24.228127 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:08:24.228138 | orchestrator |
2026-04-11 02:08:24.228149 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-11 02:08:24.228160 | orchestrator | Saturday 11 April 2026 02:07:25 +0000 (0:00:00.391) 0:04:13.805 ********
2026-04-11 02:08:24.228172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:08:24.228183 | orchestrator |
2026-04-11 02:08:24.228195 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-11 02:08:24.228206 | orchestrator | Saturday 11 April 2026 02:07:25 +0000 (0:00:00.463) 0:04:14.268 ********
2026-04-11 02:08:24.228217 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:08:24.228228 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:08:24.228239 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:08:24.228250 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:08:24.228261 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:08:24.228272 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:08:24.228283 | orchestrator | changed: [testbed-manager]
2026-04-11 02:08:24.228294 | orchestrator |
2026-04-11 02:08:24.228306 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-11 02:08:24.228317 | orchestrator | Saturday 11 April 2026 02:08:00 +0000 (0:00:35.005) 0:04:49.274 ********
2026-04-11 02:08:24.228328 | orchestrator | changed: [testbed-manager]
2026-04-11 02:08:24.228339 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:08:24.228350 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:08:24.228361 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:08:24.228372 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:08:24.228383 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:08:24.228394 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:08:24.228437 | orchestrator |
2026-04-11 02:08:24.228458 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-11 02:08:24.228477 | orchestrator | Saturday 11 April 2026 02:08:08 +0000 (0:00:08.051) 0:04:57.325 ********
2026-04-11 02:08:24.228489 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:08:24.228500 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:08:24.228510 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:08:24.228521 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:08:24.228532 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:08:24.228543 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:08:24.228554 | orchestrator | changed: [testbed-manager]
2026-04-11 02:08:24.228564 | orchestrator |
2026-04-11 02:08:24.228575 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-11 02:08:24.228594 | orchestrator | Saturday 11 April 2026 02:08:16 +0000 (0:00:07.784) 0:05:05.110 ********
2026-04-11 02:08:24.228605 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:08:24.228616 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:08:24.228627 | orchestrator | ok: [testbed-manager]
2026-04-11 02:08:24.228638 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:08:24.228649 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:08:24.228660 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:08:24.228670 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:08:24.228681 | orchestrator |
2026-04-11 02:08:24.228692 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-11 02:08:24.228704 | orchestrator | Saturday 11 April 2026 02:08:18 +0000 (0:00:01.772) 0:05:06.882 ********
2026-04-11 02:08:24.228715 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:08:24.228725 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:08:24.228736 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:08:24.228747 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:08:24.228758 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:08:24.228769 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:08:24.228780 | orchestrator | changed: [testbed-manager]
2026-04-11 02:08:24.228791 | orchestrator |
2026-04-11 02:08:24.228809 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-11 02:08:37.173098 | orchestrator | Saturday 11 April 2026 02:08:24 +0000 (0:00:05.931) 0:05:12.814 ********
2026-04-11 02:08:37.173201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:08:37.173213 | orchestrator |
2026-04-11 02:08:37.173221 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-11 02:08:37.173229 | orchestrator | Saturday 11 April 2026 02:08:24 +0000 (0:00:00.491) 0:05:13.305 ********
2026-04-11 02:08:37.173235 | orchestrator | changed: [testbed-manager]
2026-04-11 02:08:37.173243 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:08:37.173249 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:08:37.173256 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:08:37.173262 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:08:37.173268 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:08:37.173275 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:08:37.173281 | orchestrator |
2026-04-11 02:08:37.173288 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-11 02:08:37.173295 | orchestrator | Saturday 11 April 2026 02:08:25 +0000 (0:00:00.811) 0:05:14.116 ********
2026-04-11 02:08:37.173302 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:08:37.173310 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:08:37.173316 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:08:37.173322 | orchestrator | ok: [testbed-manager]
2026-04-11 02:08:37.173328 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:08:37.173334 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:08:37.173340 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:08:37.173347 | orchestrator |
2026-04-11 02:08:37.173354 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-11 02:08:37.173361 | orchestrator | Saturday 11 April 2026 02:08:27 +0000 (0:00:01.648) 0:05:15.765 ********
2026-04-11 02:08:37.173368 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:08:37.173376 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:08:37.173382 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:08:37.173390 | orchestrator | changed: [testbed-manager]
2026-04-11 02:08:37.173397 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:08:37.173405 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:08:37.173473 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:08:37.173480 | orchestrator |
2026-04-11 02:08:37.173487 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-11 02:08:37.173494 | orchestrator | Saturday 11 April 2026 02:08:28 +0000 (0:00:01.701) 0:05:17.467 ********
2026-04-11 02:08:37.173526 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:08:37.173534 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:08:37.173541 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:08:37.173548 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:08:37.173556 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:08:37.173563 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:08:37.173569 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:08:37.173576 | orchestrator |
2026-04-11 02:08:37.173583 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-11 02:08:37.173591 | orchestrator | Saturday 11 April 2026 02:08:29 +0000 (0:00:00.325) 0:05:17.792 ********
2026-04-11 02:08:37.173598 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:08:37.173605 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:08:37.173613 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:08:37.173620 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:08:37.173627 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:08:37.173635 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:08:37.173642 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:08:37.173648 | orchestrator |
2026-04-11 02:08:37.173654 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-11 02:08:37.173662 | orchestrator | Saturday 11 April 2026 02:08:29 +0000 (0:00:00.479) 0:05:18.272 ********
2026-04-11 02:08:37.173671 | orchestrator | ok: [testbed-manager]
2026-04-11 02:08:37.173680 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:08:37.173688 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:08:37.173696 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:08:37.173704 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:08:37.173712 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:08:37.173720 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:08:37.173728 | orchestrator |
2026-04-11 02:08:37.173736 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-11 02:08:37.173757 | orchestrator | Saturday 11 April 2026 02:08:30 +0000 (0:00:00.355) 0:05:18.627 ********
2026-04-11 02:08:37.173765 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:08:37.173770 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:08:37.173778 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:08:37.173783 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:08:37.173790 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:08:37.173797 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:08:37.173803 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:08:37.173808 | orchestrator |
2026-04-11 02:08:37.173815 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-04-11 02:08:37.173823 | orchestrator | Saturday 11 April 2026 02:08:30 +0000 (0:00:00.346) 0:05:18.973 ********
2026-04-11 02:08:37.173831 | orchestrator | ok: [testbed-manager]
2026-04-11 02:08:37.173838 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:08:37.173845 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:08:37.173853 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:08:37.173860 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:08:37.173867 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:08:37.173872 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:08:37.173879 | orchestrator |
2026-04-11 02:08:37.173885 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-04-11 02:08:37.173892 | orchestrator | Saturday 11 April 2026 02:08:30 +0000 (0:00:00.358) 0:05:19.332 ********
2026-04-11 02:08:37.173899 | orchestrator | ok: [testbed-manager] => 
2026-04-11 02:08:37.173908 | orchestrator |  docker_version: 5:27.5.1
2026-04-11 02:08:37.173915 | orchestrator | ok: [testbed-node-3] => 
2026-04-11 02:08:37.173922 | orchestrator |  docker_version: 5:27.5.1
2026-04-11 02:08:37.173929 | orchestrator | ok: [testbed-node-4] => 
2026-04-11 02:08:37.173936 | orchestrator |  docker_version: 5:27.5.1
2026-04-11 02:08:37.173943 | orchestrator | ok: [testbed-node-5] => 
2026-04-11 02:08:37.173951 | orchestrator |  docker_version: 5:27.5.1
2026-04-11 02:08:37.173976 | orchestrator | ok: [testbed-node-0] => 
2026-04-11 02:08:37.173995 | orchestrator |  docker_version: 5:27.5.1
2026-04-11 02:08:37.174003 | orchestrator | ok: [testbed-node-1] => 
2026-04-11 02:08:37.174009 | orchestrator |  docker_version: 5:27.5.1
2026-04-11 02:08:37.174071 | orchestrator | ok: [testbed-node-2] => 
2026-04-11 02:08:37.174079 | orchestrator |  docker_version: 5:27.5.1
2026-04-11 02:08:37.174085 | orchestrator |
2026-04-11 02:08:37.174091 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-04-11 02:08:37.174097 | orchestrator | Saturday 11 April 2026 02:08:31 +0000 (0:00:00.348) 0:05:19.680 ********
2026-04-11 02:08:37.174104 | orchestrator | ok: [testbed-manager] => 
2026-04-11 02:08:37.174111 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-11 02:08:37.174117 | orchestrator | ok: [testbed-node-3] => 
2026-04-11 02:08:37.174124 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-11 02:08:37.174130 | orchestrator | ok: [testbed-node-4] => 
2026-04-11 02:08:37.174136 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-11 02:08:37.174140 | orchestrator | ok: [testbed-node-5] => 
2026-04-11 02:08:37.174144 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-11 02:08:37.174148 | orchestrator | ok: [testbed-node-0] => 
2026-04-11 02:08:37.174152 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-11 02:08:37.174156 | orchestrator | ok: [testbed-node-1] => 
2026-04-11 02:08:37.174162 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-11 02:08:37.174168 | orchestrator | ok: [testbed-node-2] => 
2026-04-11 02:08:37.174174 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-11 02:08:37.174179 | orchestrator |
2026-04-11 02:08:37.174188 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-04-11 02:08:37.174197 | orchestrator | Saturday 11 April 2026 02:08:31 +0000 (0:00:00.349) 0:05:20.029 ********
2026-04-11 02:08:37.174203 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:08:37.174209 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:08:37.174215 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:08:37.174221 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:08:37.174228 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:08:37.174234 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:08:37.174241 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:08:37.174247 | orchestrator |
2026-04-11 02:08:37.174254 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-04-11 02:08:37.174260 | orchestrator | Saturday 11 April 2026 02:08:31 +0000 (0:00:00.321) 0:05:20.350 ********
2026-04-11 02:08:37.174266 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:08:37.174272 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:08:37.174278 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:08:37.174284 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:08:37.174288 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:08:37.174292 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:08:37.174296 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:08:37.174300 | orchestrator |
2026-04-11 02:08:37.174304 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-04-11 02:08:37.174308 | orchestrator | Saturday 11 April 2026 02:08:32 +0000 (0:00:00.373) 0:05:20.724 ********
2026-04-11 02:08:37.174313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:08:37.174319 | orchestrator |
2026-04-11 02:08:37.174323 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-04-11 02:08:37.174327 | orchestrator | Saturday 11 April 2026 02:08:32 +0000 (0:00:00.497) 0:05:21.221 ********
2026-04-11 02:08:37.174332 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:08:37.174338 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:08:37.174344 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:08:37.174350 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:08:37.174356 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:08:37.174370 | orchestrator | ok: [testbed-manager]
2026-04-11 02:08:37.174377 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:08:37.174383 | orchestrator |
2026-04-11 02:08:37.174390 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-04-11 02:08:37.174396 | orchestrator | Saturday 11 April 2026 02:08:33 +0000 (0:00:00.996) 0:05:22.218 ********
2026-04-11 02:08:37.174403 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:08:37.174428 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:08:37.174435 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:08:37.174442 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:08:37.174449 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:08:37.174463 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:08:37.174470 | orchestrator | ok: [testbed-manager]
2026-04-11 02:08:37.174476 | orchestrator |
2026-04-11 02:08:37.174483 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-04-11 02:08:37.174491 | orchestrator | Saturday 11 April 2026 02:08:36 +0000 (0:00:03.076) 0:05:25.295 ********
2026-04-11 02:08:37.174497 | orchestrator | skipping: [testbed-manager] => (item=containerd) 
2026-04-11 02:08:37.174505 | orchestrator | skipping: [testbed-manager] => (item=docker.io) 
2026-04-11 02:08:37.174510 | orchestrator | skipping: [testbed-manager] => (item=docker-engine) 
2026-04-11 02:08:37.174517 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:08:37.174524 | orchestrator | skipping: [testbed-node-3] => (item=containerd) 
2026-04-11 02:08:37.174530 | orchestrator | skipping: [testbed-node-3] => (item=docker.io) 
2026-04-11 02:08:37.174538 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine) 
2026-04-11 02:08:37.174544 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:08:37.174551 | orchestrator | skipping: [testbed-node-4] => (item=containerd) 
2026-04-11 02:08:37.174555 | orchestrator | skipping: [testbed-node-4] => (item=docker.io) 
2026-04-11 02:08:37.174559 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine) 
2026-04-11 02:08:37.174563 | orchestrator | skipping: [testbed-node-5] => (item=containerd) 
2026-04-11 02:08:37.174569 | orchestrator | skipping: [testbed-node-5] => (item=docker.io) 
2026-04-11 02:08:37.174575 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine) 
2026-04-11 02:08:37.174581 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:08:37.174587 | orchestrator | skipping: [testbed-node-0] => (item=containerd) 
2026-04-11 02:08:37.174606 | orchestrator | skipping: [testbed-node-0] => (item=docker.io) 
2026-04-11 02:09:37.222697 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine) 
2026-04-11 02:09:37.222819 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:09:37.222836 | orchestrator | skipping: [testbed-node-1] => (item=containerd) 
2026-04-11 02:09:37.222849 | orchestrator | skipping: [testbed-node-1] => (item=docker.io) 
2026-04-11 02:09:37.222861 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine) 
2026-04-11 02:09:37.222872 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:09:37.222883 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:09:37.222894 | orchestrator | skipping: [testbed-node-2] => (item=containerd) 
2026-04-11 02:09:37.222905 | orchestrator | skipping: [testbed-node-2] => (item=docker.io) 
2026-04-11 02:09:37.222917 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine) 
2026-04-11 02:09:37.222928 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:09:37.222939 | orchestrator |
2026-04-11 02:09:37.222952 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-04-11 02:09:37.222964 | orchestrator | Saturday 11 April 2026 02:08:37 +0000 (0:00:00.709) 0:05:26.004 ********
2026-04-11 02:09:37.222975 | orchestrator | ok: [testbed-manager]
2026-04-11 02:09:37.222986 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:09:37.222997 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:09:37.223007 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:09:37.223019 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:09:37.223030 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:09:37.223041 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:09:37.223079 | orchestrator |
2026-04-11 02:09:37.223090 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-04-11 02:09:37.223101 | orchestrator | Saturday 11 April 2026 02:08:43 +0000 (0:00:06.340) 0:05:32.345 ********
2026-04-11 02:09:37.223112 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:09:37.223123 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:09:37.223134 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:09:37.223145 | orchestrator | ok: [testbed-manager]
2026-04-11 02:09:37.223156 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:09:37.223166 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:09:37.223177 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:09:37.223188 | orchestrator |
2026-04-11 02:09:37.223199 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-04-11 02:09:37.223212 | orchestrator | Saturday 11 April 2026 02:08:44 +0000 (0:00:01.116) 0:05:33.462 ********
2026-04-11 02:09:37.223251 | orchestrator | ok: [testbed-manager]
2026-04-11 02:09:37.223272 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:09:37.223291 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:09:37.223315 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:09:37.223343 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:09:37.223362 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:09:37.223380 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:09:37.223399 | orchestrator |
2026-04-11 02:09:37.223416 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-04-11 02:09:37.223434 | orchestrator | Saturday 11 April 2026 02:08:52 +0000 (0:00:07.861) 0:05:41.323 ********
2026-04-11 02:09:37.223452 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:09:37.223470 | orchestrator | changed: [testbed-manager]
2026-04-11 02:09:37.223489 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:09:37.223506 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:09:37.223525 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:09:37.223545 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:09:37.223560 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:09:37.223578 | orchestrator |
2026-04-11 02:09:37.223621 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-04-11 02:09:37.223642 | orchestrator | Saturday 11 April 2026 02:08:56 +0000 (0:00:03.378) 0:05:44.702 ********
2026-04-11 02:09:37.223660 | orchestrator | ok: [testbed-manager]
2026-04-11 02:09:37.223696 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:09:37.223714 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:09:37.223731 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:09:37.223747 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:09:37.223764 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:09:37.223782 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:09:37.223800 | orchestrator |
2026-04-11 02:09:37.223819 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-04-11 02:09:37.223839 | orchestrator | Saturday 11 April 2026 02:08:57 +0000 (0:00:01.398) 0:05:46.101 ********
2026-04-11 02:09:37.223857 | orchestrator | ok: [testbed-manager]
2026-04-11 02:09:37.223875 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:09:37.223893 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:09:37.223911 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:09:37.223927 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:09:37.223938 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:09:37.223949 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:09:37.223960 | orchestrator |
2026-04-11 02:09:37.223971 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-04-11 02:09:37.223982 | orchestrator | Saturday 11 April 2026 02:08:59 +0000 (0:00:00.705) 0:05:47.768 ********
2026-04-11 02:09:37.223993 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:09:37.224004 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:09:37.224014 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:09:37.224025 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:09:37.224053 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:09:37.224064 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:09:37.224074 | orchestrator | changed: [testbed-manager]
2026-04-11 02:09:37.224085 | orchestrator |
2026-04-11 02:09:37.224096 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-04-11 02:09:37.224107 | orchestrator | Saturday 11 April 2026 02:08:59 +0000 (0:00:00.705) 0:05:48.474 ********
2026-04-11 02:09:37.224118 | orchestrator | ok: [testbed-manager]
2026-04-11 02:09:37.224128 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:09:37.224139 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:09:37.224150 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:09:37.224161 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:09:37.224171 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:09:37.224182 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:09:37.224193 | orchestrator |
2026-04-11 02:09:37.224204 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-04-11 02:09:37.224270 | orchestrator | Saturday 11 April 2026 02:09:09 +0000 (0:00:09.248) 0:05:57.723 ********
2026-04-11 02:09:37.224291 | orchestrator | changed: [testbed-manager]
2026-04-11 02:09:37.224310 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:09:37.224328 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:09:37.224345 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:09:37.224363 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:09:37.224383 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:09:37.224401 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:09:37.224421 | orchestrator |
2026-04-11 02:09:37.224433 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-04-11 02:09:37.224444 | orchestrator | Saturday 11 April 2026 02:09:10 +0000 (0:00:00.955) 0:05:58.678 ********
2026-04-11 02:09:37.224455 | orchestrator | ok: [testbed-manager]
2026-04-11 02:09:37.224466 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:09:37.224476 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:09:37.224487 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:09:37.224497 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:09:37.224508 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:09:37.224519 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:09:37.224529 | orchestrator |
2026-04-11 02:09:37.224540 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-04-11 02:09:37.224551 | orchestrator | Saturday 11 April 2026 02:09:19 +0000 (0:00:09.172) 0:06:07.851 ********
2026-04-11 02:09:37.224562 | orchestrator | ok: [testbed-manager]
2026-04-11 02:09:37.224573 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:09:37.224583 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:09:37.224594 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:09:37.224605 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:09:37.224615 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:09:37.224626 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:09:37.224636 | orchestrator |
2026-04-11 02:09:37.224647 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-04-11 02:09:37.224658 | orchestrator | Saturday 11 April 2026 02:09:30 +0000 (0:00:11.190) 0:06:19.041 ********
2026-04-11 02:09:37.224669 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-11 02:09:37.224680 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-11 02:09:37.224690 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-11 02:09:37.224701 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-11 02:09:37.224712 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-11 02:09:37.224722 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-11 02:09:37.224733 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-11 02:09:37.224744 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-11 02:09:37.224755 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-11 02:09:37.224766 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-11 02:09:37.224786 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-11 02:09:37.224849 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-11 02:09:37.224862 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-11 02:09:37.224873 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-11 02:09:37.224884 | orchestrator |
2026-04-11 02:09:37.224895 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-11 02:09:37.224906 | orchestrator | Saturday 11 April 2026 02:09:31 +0000 (0:00:01.273) 0:06:20.314 ********
2026-04-11 02:09:37.224917 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:09:37.224928 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:09:37.224939 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:09:37.224949 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:09:37.224960 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:09:37.224971 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:09:37.224982 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:09:37.224993 | orchestrator |
2026-04-11 02:09:37.225004 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-11 02:09:37.225015 | orchestrator | Saturday 11 April 2026 02:09:32 +0000 (0:00:00.608) 0:06:20.923 ********
2026-04-11 02:09:37.225025 | orchestrator | ok: [testbed-manager]
2026-04-11 02:09:37.225036 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:09:37.225047 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:09:37.225058 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:09:37.225069 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:09:37.225080 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:09:37.225096 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:09:37.225106 | orchestrator |
2026-04-11 02:09:37.225118 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-11 02:09:37.225130 | orchestrator | Saturday 11 April 2026 02:09:36 +0000 (0:00:03.795) 0:06:24.719 ********
2026-04-11 02:09:37.225141 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:09:37.225152 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:09:37.225163 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:09:37.225174 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:09:37.225184 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:09:37.225195 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:09:37.225206 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:09:37.225217 | orchestrator |
2026-04-11 02:09:37.225258 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-11 02:09:37.225270 | orchestrator | Saturday 11 April 2026 02:09:36 +0000 (0:00:00.548) 0:06:25.268 ********
2026-04-11 02:09:37.225281 | orchestrator | skipping: [testbed-manager] => (item=python3-docker) 
2026-04-11 02:09:37.225293 | orchestrator | skipping: [testbed-manager] => (item=python-docker) 
2026-04-11 02:09:37.225304 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:09:37.225315 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker) 
2026-04-11 02:09:37.225326 | orchestrator | skipping: [testbed-node-3] => (item=python-docker) 
2026-04-11 02:09:37.225337 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:09:37.225348 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker) 
2026-04-11 02:09:37.225359 | orchestrator | skipping: [testbed-node-4] => (item=python-docker) 
2026-04-11 02:09:37.225370 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:09:37.225392 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker) 
2026-04-11 02:09:57.533911 | orchestrator | skipping: [testbed-node-5] => (item=python-docker) 
2026-04-11 02:09:57.534122 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:09:57.534149 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker) 
2026-04-11 02:09:57.534251 | orchestrator | skipping: [testbed-node-0] => (item=python-docker) 
2026-04-11 02:09:57.534267 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:09:57.534309 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker) 
2026-04-11 02:09:57.534323 | orchestrator | skipping: [testbed-node-1] => (item=python-docker) 
2026-04-11 02:09:57.534336 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:09:57.534348 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker) 
2026-04-11 02:09:57.534360 | orchestrator | skipping: [testbed-node-2] => (item=python-docker) 
2026-04-11 02:09:57.534368 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:09:57.534375 | orchestrator |
2026-04-11 02:09:57.534385 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-11 02:09:57.534393 | orchestrator | Saturday 11 April 2026 02:09:37 +0000 (0:00:00.845) 0:06:26.113 ********
2026-04-11 02:09:57.534401 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:09:57.534408 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:09:57.534416 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:09:57.534423 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:09:57.534430 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:09:57.534439 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:09:57.534448 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:09:57.534457 | orchestrator |
2026-04-11 02:09:57.534470 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-11 02:09:57.534482 | orchestrator | Saturday 11 April 2026 02:09:38 +0000 (0:00:00.603) 0:06:26.717 ********
2026-04-11 02:09:57.534495 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:09:57.534506 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:09:57.534517 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:09:57.534529 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:09:57.534541 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:09:57.534553 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:09:57.534566 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:09:57.534580 | orchestrator |
2026-04-11 02:09:57.534593 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-11 02:09:57.534604 | orchestrator | Saturday 11 April 2026 02:09:38 +0000 (0:00:00.561) 0:06:27.279 ********
2026-04-11 02:09:57.534612 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:09:57.534619 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:09:57.534626 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:09:57.534633 | orchestrator | skipping:
[testbed-node-5] 2026-04-11 02:09:57.534640 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:09:57.534648 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:09:57.534655 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:09:57.534662 | orchestrator | 2026-04-11 02:09:57.534669 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-04-11 02:09:57.534677 | orchestrator | Saturday 11 April 2026 02:09:39 +0000 (0:00:00.613) 0:06:27.892 ******** 2026-04-11 02:09:57.534684 | orchestrator | ok: [testbed-manager] 2026-04-11 02:09:57.534692 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:09:57.534699 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:09:57.534706 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:09:57.534713 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:09:57.534721 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:09:57.534728 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:09:57.534735 | orchestrator | 2026-04-11 02:09:57.534742 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-04-11 02:09:57.534750 | orchestrator | Saturday 11 April 2026 02:09:41 +0000 (0:00:01.958) 0:06:29.851 ******** 2026-04-11 02:09:57.534758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:09:57.534767 | orchestrator | 2026-04-11 02:09:57.534775 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-04-11 02:09:57.534782 | orchestrator | Saturday 11 April 2026 02:09:42 +0000 (0:00:01.036) 0:06:30.887 ******** 2026-04-11 02:09:57.534807 | orchestrator | ok: [testbed-manager] 2026-04-11 02:09:57.534814 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:09:57.534822 | orchestrator | changed: 
[testbed-node-4] 2026-04-11 02:09:57.534829 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:09:57.534837 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:09:57.534847 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:09:57.534859 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:09:57.534871 | orchestrator | 2026-04-11 02:09:57.534884 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-04-11 02:09:57.534896 | orchestrator | Saturday 11 April 2026 02:09:43 +0000 (0:00:00.936) 0:06:31.824 ******** 2026-04-11 02:09:57.534907 | orchestrator | ok: [testbed-manager] 2026-04-11 02:09:57.534919 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:09:57.534930 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:09:57.534941 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:09:57.534952 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:09:57.534962 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:09:57.534973 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:09:57.534985 | orchestrator | 2026-04-11 02:09:57.534998 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-04-11 02:09:57.535011 | orchestrator | Saturday 11 April 2026 02:09:44 +0000 (0:00:00.879) 0:06:32.704 ******** 2026-04-11 02:09:57.535024 | orchestrator | ok: [testbed-manager] 2026-04-11 02:09:57.535036 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:09:57.535048 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:09:57.535060 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:09:57.535072 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:09:57.535084 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:09:57.535097 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:09:57.535110 | orchestrator | 2026-04-11 02:09:57.535123 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-04-11 02:09:57.535159 | orchestrator | Saturday 11 April 2026 02:09:45 +0000 (0:00:01.637) 0:06:34.341 ******** 2026-04-11 02:09:57.535195 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:09:57.535207 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:09:57.535214 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:09:57.535222 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:09:57.535229 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:09:57.535237 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:09:57.535244 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:09:57.535251 | orchestrator | 2026-04-11 02:09:57.535259 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-04-11 02:09:57.535267 | orchestrator | Saturday 11 April 2026 02:09:47 +0000 (0:00:01.378) 0:06:35.720 ******** 2026-04-11 02:09:57.535274 | orchestrator | ok: [testbed-manager] 2026-04-11 02:09:57.535281 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:09:57.535289 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:09:57.535296 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:09:57.535303 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:09:57.535311 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:09:57.535318 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:09:57.535325 | orchestrator | 2026-04-11 02:09:57.535333 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-04-11 02:09:57.535340 | orchestrator | Saturday 11 April 2026 02:09:48 +0000 (0:00:01.371) 0:06:37.091 ******** 2026-04-11 02:09:57.535347 | orchestrator | changed: [testbed-manager] 2026-04-11 02:09:57.535355 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:09:57.535363 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:09:57.535375 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:09:57.535387 | orchestrator | changed: 
[testbed-node-1] 2026-04-11 02:09:57.535399 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:09:57.535411 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:09:57.535423 | orchestrator | 2026-04-11 02:09:57.535446 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-04-11 02:09:57.535459 | orchestrator | Saturday 11 April 2026 02:09:49 +0000 (0:00:01.456) 0:06:38.547 ******** 2026-04-11 02:09:57.535472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:09:57.535485 | orchestrator | 2026-04-11 02:09:57.535497 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-04-11 02:09:57.535509 | orchestrator | Saturday 11 April 2026 02:09:51 +0000 (0:00:01.076) 0:06:39.623 ******** 2026-04-11 02:09:57.535519 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:09:57.535530 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:09:57.535541 | orchestrator | ok: [testbed-manager] 2026-04-11 02:09:57.535552 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:09:57.535564 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:09:57.535576 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:09:57.535589 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:09:57.535602 | orchestrator | 2026-04-11 02:09:57.535615 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-11 02:09:57.535627 | orchestrator | Saturday 11 April 2026 02:09:52 +0000 (0:00:01.345) 0:06:40.969 ******** 2026-04-11 02:09:57.535639 | orchestrator | ok: [testbed-manager] 2026-04-11 02:09:57.535651 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:09:57.535663 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:09:57.535675 | orchestrator | ok: [testbed-node-5] 
2026-04-11 02:09:57.535687 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:09:57.535699 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:09:57.535712 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:09:57.535725 | orchestrator | 2026-04-11 02:09:57.535737 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-11 02:09:57.535750 | orchestrator | Saturday 11 April 2026 02:09:53 +0000 (0:00:01.122) 0:06:42.092 ******** 2026-04-11 02:09:57.535762 | orchestrator | ok: [testbed-manager] 2026-04-11 02:09:57.535774 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:09:57.535786 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:09:57.535799 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:09:57.535811 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:09:57.535824 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:09:57.535836 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:09:57.535849 | orchestrator | 2026-04-11 02:09:57.535862 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-11 02:09:57.535874 | orchestrator | Saturday 11 April 2026 02:09:54 +0000 (0:00:01.183) 0:06:43.275 ******** 2026-04-11 02:09:57.535886 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:09:57.535916 | orchestrator | ok: [testbed-manager] 2026-04-11 02:09:57.535930 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:09:57.535943 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:09:57.535955 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:09:57.535967 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:09:57.535979 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:09:57.535991 | orchestrator | 2026-04-11 02:09:57.536002 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-11 02:09:57.536014 | orchestrator | Saturday 11 April 2026 02:09:56 +0000 (0:00:01.437) 0:06:44.713 ******** 2026-04-11 02:09:57.536026 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:09:57.536039 | orchestrator | 2026-04-11 02:09:57.536051 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-11 02:09:57.536063 | orchestrator | Saturday 11 April 2026 02:09:57 +0000 (0:00:01.025) 0:06:45.738 ******** 2026-04-11 02:09:57.536075 | orchestrator | 2026-04-11 02:09:57.536086 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-11 02:09:57.536107 | orchestrator | Saturday 11 April 2026 02:09:57 +0000 (0:00:00.075) 0:06:45.813 ******** 2026-04-11 02:09:57.536118 | orchestrator | 2026-04-11 02:09:57.536129 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-11 02:09:57.536218 | orchestrator | Saturday 11 April 2026 02:09:57 +0000 (0:00:00.049) 0:06:45.863 ******** 2026-04-11 02:09:57.536235 | orchestrator | 2026-04-11 02:09:57.536247 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-11 02:09:57.536272 | orchestrator | Saturday 11 April 2026 02:09:57 +0000 (0:00:00.044) 0:06:45.908 ******** 2026-04-11 02:10:24.103237 | orchestrator | 2026-04-11 02:10:24.103406 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-11 02:10:24.103432 | orchestrator | Saturday 11 April 2026 02:09:57 +0000 (0:00:00.043) 0:06:45.951 ******** 2026-04-11 02:10:24.103450 | orchestrator | 2026-04-11 02:10:24.103469 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-11 02:10:24.103487 | orchestrator | Saturday 11 April 2026 02:09:57 +0000 (0:00:00.049) 0:06:46.000 ******** 2026-04-11 02:10:24.103503 | orchestrator | 
2026-04-11 02:10:24.103519 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-11 02:10:24.103537 | orchestrator | Saturday 11 April 2026 02:09:57 +0000 (0:00:00.069) 0:06:46.070 ******** 2026-04-11 02:10:24.103554 | orchestrator | 2026-04-11 02:10:24.103571 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-11 02:10:24.103588 | orchestrator | Saturday 11 April 2026 02:09:57 +0000 (0:00:00.047) 0:06:46.117 ******** 2026-04-11 02:10:24.103637 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:10:24.103658 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:10:24.103679 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:10:24.103703 | orchestrator | 2026-04-11 02:10:24.103737 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-04-11 02:10:24.103754 | orchestrator | Saturday 11 April 2026 02:09:58 +0000 (0:00:01.141) 0:06:47.259 ******** 2026-04-11 02:10:24.103771 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:10:24.103786 | orchestrator | changed: [testbed-manager] 2026-04-11 02:10:24.103801 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:10:24.103815 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:10:24.103830 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:10:24.103845 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:10:24.103861 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:10:24.103878 | orchestrator | 2026-04-11 02:10:24.103895 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-04-11 02:10:24.103913 | orchestrator | Saturday 11 April 2026 02:10:00 +0000 (0:00:01.568) 0:06:48.827 ******** 2026-04-11 02:10:24.103925 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:10:24.103935 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:10:24.103945 | orchestrator | changed: [testbed-manager] 
2026-04-11 02:10:24.103954 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:10:24.103964 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:10:24.103974 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:10:24.103984 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:10:24.103994 | orchestrator | 2026-04-11 02:10:24.104004 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-04-11 02:10:24.104013 | orchestrator | Saturday 11 April 2026 02:10:01 +0000 (0:00:01.474) 0:06:50.303 ******** 2026-04-11 02:10:24.104023 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:10:24.104033 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:10:24.104042 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:10:24.104052 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:10:24.104062 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:10:24.104072 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:10:24.104081 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:10:24.104120 | orchestrator | 2026-04-11 02:10:24.104138 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-04-11 02:10:24.104154 | orchestrator | Saturday 11 April 2026 02:10:04 +0000 (0:00:02.465) 0:06:52.768 ******** 2026-04-11 02:10:24.104210 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:10:24.104257 | orchestrator | 2026-04-11 02:10:24.104273 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-04-11 02:10:24.104289 | orchestrator | Saturday 11 April 2026 02:10:04 +0000 (0:00:00.100) 0:06:52.869 ******** 2026-04-11 02:10:24.104305 | orchestrator | ok: [testbed-manager] 2026-04-11 02:10:24.104322 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:10:24.104338 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:10:24.104354 | orchestrator | changed: [testbed-node-5] 2026-04-11 
02:10:24.104369 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:10:24.104379 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:10:24.104388 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:10:24.104398 | orchestrator | 2026-04-11 02:10:24.104408 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-04-11 02:10:24.104419 | orchestrator | Saturday 11 April 2026 02:10:05 +0000 (0:00:01.105) 0:06:53.975 ******** 2026-04-11 02:10:24.104428 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:10:24.104454 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:10:24.104464 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:10:24.104473 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:10:24.104483 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:10:24.104492 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:10:24.104502 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:10:24.104511 | orchestrator | 2026-04-11 02:10:24.104521 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-04-11 02:10:24.104530 | orchestrator | Saturday 11 April 2026 02:10:05 +0000 (0:00:00.605) 0:06:54.580 ******** 2026-04-11 02:10:24.104541 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:10:24.104565 | orchestrator | 2026-04-11 02:10:24.104585 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-04-11 02:10:24.104595 | orchestrator | Saturday 11 April 2026 02:10:07 +0000 (0:00:01.244) 0:06:55.825 ******** 2026-04-11 02:10:24.104605 | orchestrator | ok: [testbed-manager] 2026-04-11 02:10:24.104614 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:10:24.104624 | orchestrator 
| ok: [testbed-node-4] 2026-04-11 02:10:24.104633 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:10:24.104643 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:10:24.104653 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:10:24.104663 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:10:24.104672 | orchestrator | 2026-04-11 02:10:24.104682 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-04-11 02:10:24.104692 | orchestrator | Saturday 11 April 2026 02:10:08 +0000 (0:00:00.908) 0:06:56.733 ******** 2026-04-11 02:10:24.104702 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-04-11 02:10:24.104735 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-04-11 02:10:24.104746 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-04-11 02:10:24.104756 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-04-11 02:10:24.104765 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-04-11 02:10:24.104775 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-04-11 02:10:24.104784 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-04-11 02:10:24.104794 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-04-11 02:10:24.104804 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-04-11 02:10:24.104813 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-04-11 02:10:24.104823 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-04-11 02:10:24.104832 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-04-11 02:10:24.104855 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-04-11 02:10:24.104864 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-04-11 02:10:24.104874 | orchestrator | 2026-04-11 02:10:24.104883 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-04-11 02:10:24.104893 | orchestrator | Saturday 11 April 2026 02:10:10 +0000 (0:00:02.462) 0:06:59.196 ******** 2026-04-11 02:10:24.104902 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:10:24.104912 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:10:24.104921 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:10:24.104931 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:10:24.104940 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:10:24.104950 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:10:24.104959 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:10:24.104969 | orchestrator | 2026-04-11 02:10:24.104979 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-04-11 02:10:24.104989 | orchestrator | Saturday 11 April 2026 02:10:11 +0000 (0:00:00.765) 0:06:59.962 ******** 2026-04-11 02:10:24.105001 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:10:24.105013 | orchestrator | 2026-04-11 02:10:24.105027 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-04-11 02:10:24.105043 | orchestrator | Saturday 11 April 2026 02:10:12 +0000 (0:00:00.914) 0:07:00.876 ******** 2026-04-11 02:10:24.105060 | orchestrator | ok: [testbed-manager] 2026-04-11 02:10:24.105076 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:10:24.105114 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:10:24.105131 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:10:24.105147 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:10:24.105157 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:10:24.105166 | orchestrator | ok: 
[testbed-node-2] 2026-04-11 02:10:24.105176 | orchestrator | 2026-04-11 02:10:24.105186 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-04-11 02:10:24.105196 | orchestrator | Saturday 11 April 2026 02:10:13 +0000 (0:00:00.903) 0:07:01.780 ******** 2026-04-11 02:10:24.105205 | orchestrator | ok: [testbed-manager] 2026-04-11 02:10:24.105215 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:10:24.105224 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:10:24.105234 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:10:24.105243 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:10:24.105253 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:10:24.105262 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:10:24.105272 | orchestrator | 2026-04-11 02:10:24.105281 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-04-11 02:10:24.105291 | orchestrator | Saturday 11 April 2026 02:10:14 +0000 (0:00:01.116) 0:07:02.896 ******** 2026-04-11 02:10:24.105301 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:10:24.105310 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:10:24.105320 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:10:24.105329 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:10:24.105339 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:10:24.105348 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:10:24.105357 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:10:24.105367 | orchestrator | 2026-04-11 02:10:24.105382 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-04-11 02:10:24.105405 | orchestrator | Saturday 11 April 2026 02:10:14 +0000 (0:00:00.582) 0:07:03.479 ******** 2026-04-11 02:10:24.105426 | orchestrator | ok: [testbed-manager] 2026-04-11 02:10:24.105440 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:10:24.105455 | 
orchestrator | ok: [testbed-node-5] 2026-04-11 02:10:24.105470 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:10:24.105493 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:10:24.105523 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:10:24.105538 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:10:24.105552 | orchestrator | 2026-04-11 02:10:24.105566 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-04-11 02:10:24.105581 | orchestrator | Saturday 11 April 2026 02:10:16 +0000 (0:00:01.465) 0:07:04.945 ******** 2026-04-11 02:10:24.105595 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:10:24.105610 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:10:24.105625 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:10:24.105638 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:10:24.105653 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:10:24.105668 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:10:24.105683 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:10:24.105698 | orchestrator | 2026-04-11 02:10:24.105713 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-04-11 02:10:24.105727 | orchestrator | Saturday 11 April 2026 02:10:16 +0000 (0:00:00.568) 0:07:05.514 ******** 2026-04-11 02:10:24.105743 | orchestrator | ok: [testbed-manager] 2026-04-11 02:10:24.105759 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:10:24.105774 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:10:24.105788 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:10:24.105801 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:10:24.105815 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:10:24.105846 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:10:57.871095 | orchestrator | 2026-04-11 02:10:57.871192 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] ***********
2026-04-11 02:10:57.871206 | orchestrator | Saturday 11 April 2026 02:10:24 +0000 (0:00:07.175) 0:07:12.689 ********
2026-04-11 02:10:57.871214 | orchestrator | ok: [testbed-manager]
2026-04-11 02:10:57.871223 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:10:57.871231 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:10:57.871238 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:10:57.871246 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:10:57.871253 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:10:57.871260 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:10:57.871267 | orchestrator |
2026-04-11 02:10:57.871275 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-11 02:10:57.871283 | orchestrator | Saturday 11 April 2026 02:10:25 +0000 (0:00:01.648) 0:07:14.338 ********
2026-04-11 02:10:57.871290 | orchestrator | ok: [testbed-manager]
2026-04-11 02:10:57.871298 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:10:57.871305 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:10:57.871312 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:10:57.871319 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:10:57.871326 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:10:57.871333 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:10:57.871340 | orchestrator |
2026-04-11 02:10:57.871347 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-11 02:10:57.871355 | orchestrator | Saturday 11 April 2026 02:10:27 +0000 (0:00:01.823) 0:07:16.162 ********
2026-04-11 02:10:57.871362 | orchestrator | ok: [testbed-manager]
2026-04-11 02:10:57.871369 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:10:57.871376 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:10:57.871383 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:10:57.871391 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:10:57.871398 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:10:57.871405 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:10:57.871412 | orchestrator |
2026-04-11 02:10:57.871420 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-11 02:10:57.871427 | orchestrator | Saturday 11 April 2026 02:10:29 +0000 (0:00:01.756) 0:07:17.918 ********
2026-04-11 02:10:57.871434 | orchestrator | ok: [testbed-manager]
2026-04-11 02:10:57.871441 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:10:57.871449 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:10:57.871477 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:10:57.871484 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:10:57.871491 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:10:57.871498 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:10:57.871506 | orchestrator |
2026-04-11 02:10:57.871513 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-11 02:10:57.871520 | orchestrator | Saturday 11 April 2026 02:10:30 +0000 (0:00:00.904) 0:07:18.823 ********
2026-04-11 02:10:57.871528 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:10:57.871535 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:10:57.871542 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:10:57.871549 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:10:57.871556 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:10:57.871564 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:10:57.871573 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:10:57.871581 | orchestrator |
2026-04-11 02:10:57.871589 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-11 02:10:57.871597 | orchestrator | Saturday 11 April 2026 02:10:31 +0000 (0:00:01.170) 0:07:19.993 ********
2026-04-11 02:10:57.871606 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:10:57.871614 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:10:57.871622 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:10:57.871630 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:10:57.871638 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:10:57.871646 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:10:57.871654 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:10:57.871663 | orchestrator |
2026-04-11 02:10:57.871671 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-11 02:10:57.871679 | orchestrator | Saturday 11 April 2026 02:10:32 +0000 (0:00:00.639) 0:07:20.633 ********
2026-04-11 02:10:57.871688 | orchestrator | ok: [testbed-manager]
2026-04-11 02:10:57.871710 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:10:57.871719 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:10:57.871727 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:10:57.871736 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:10:57.871744 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:10:57.871757 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:10:57.871765 | orchestrator |
2026-04-11 02:10:57.871773 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-11 02:10:57.871782 | orchestrator | Saturday 11 April 2026 02:10:32 +0000 (0:00:00.634) 0:07:21.267 ********
2026-04-11 02:10:57.871790 | orchestrator | ok: [testbed-manager]
2026-04-11 02:10:57.871799 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:10:57.871806 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:10:57.871814 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:10:57.871821 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:10:57.871828 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:10:57.871835 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:10:57.871842 | orchestrator |
2026-04-11 02:10:57.871850 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-11 02:10:57.871857 | orchestrator | Saturday 11 April 2026 02:10:33 +0000 (0:00:00.595) 0:07:21.862 ********
2026-04-11 02:10:57.871864 | orchestrator | ok: [testbed-manager]
2026-04-11 02:10:57.871871 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:10:57.871879 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:10:57.871886 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:10:57.871893 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:10:57.871900 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:10:57.871907 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:10:57.871914 | orchestrator |
2026-04-11 02:10:57.871922 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-11 02:10:57.871929 | orchestrator | Saturday 11 April 2026 02:10:34 +0000 (0:00:00.831) 0:07:22.694 ********
2026-04-11 02:10:57.871936 | orchestrator | ok: [testbed-manager]
2026-04-11 02:10:57.871943 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:10:57.871956 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:10:57.871963 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:10:57.871971 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:10:57.871978 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:10:57.871985 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:10:57.871992 | orchestrator |
2026-04-11 02:10:57.872026 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-11 02:10:57.872039 | orchestrator | Saturday 11 April 2026 02:10:39 +0000 (0:00:05.509) 0:07:28.203 ********
2026-04-11 02:10:57.872051 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:10:57.872064 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:10:57.872075 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:10:57.872086 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:10:57.872098 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:10:57.872110 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:10:57.872122 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:10:57.872133 | orchestrator |
2026-04-11 02:10:57.872145 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-11 02:10:57.872158 | orchestrator | Saturday 11 April 2026 02:10:40 +0000 (0:00:00.630) 0:07:28.833 ********
2026-04-11 02:10:57.872171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:10:57.872186 | orchestrator |
2026-04-11 02:10:57.872199 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-11 02:10:57.872212 | orchestrator | Saturday 11 April 2026 02:10:41 +0000 (0:00:01.204) 0:07:30.038 ********
2026-04-11 02:10:57.872226 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:10:57.872234 | orchestrator | ok: [testbed-manager]
2026-04-11 02:10:57.872241 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:10:57.872248 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:10:57.872256 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:10:57.872263 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:10:57.872270 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:10:57.872277 | orchestrator |
2026-04-11 02:10:57.872285 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-11 02:10:57.872292 | orchestrator | Saturday 11 April 2026 02:10:43 +0000 (0:00:01.914) 0:07:31.953 ********
2026-04-11 02:10:57.872299 | orchestrator | ok: [testbed-manager]
2026-04-11 02:10:57.872307 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:10:57.872314 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:10:57.872321 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:10:57.872328 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:10:57.872335 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:10:57.872343 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:10:57.872350 | orchestrator |
2026-04-11 02:10:57.872358 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-11 02:10:57.872365 | orchestrator | Saturday 11 April 2026 02:10:44 +0000 (0:00:01.174) 0:07:33.128 ********
2026-04-11 02:10:57.872372 | orchestrator | ok: [testbed-manager]
2026-04-11 02:10:57.872379 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:10:57.872387 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:10:57.872394 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:10:57.872401 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:10:57.872408 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:10:57.872415 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:10:57.872423 | orchestrator |
2026-04-11 02:10:57.872430 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-11 02:10:57.872437 | orchestrator | Saturday 11 April 2026 02:10:45 +0000 (0:00:00.922) 0:07:34.050 ********
2026-04-11 02:10:57.872445 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-11 02:10:57.872454 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-11 02:10:57.872469 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-11 02:10:57.872476 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-11 02:10:57.872488 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-11 02:10:57.872496 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-11 02:10:57.872503 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-11 02:10:57.872510 | orchestrator |
2026-04-11 02:10:57.872518 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-11 02:10:57.872525 | orchestrator | Saturday 11 April 2026 02:10:47 +0000 (0:00:01.980) 0:07:36.031 ********
2026-04-11 02:10:57.872532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:10:57.872540 | orchestrator |
2026-04-11 02:10:57.872547 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-11 02:10:57.872554 | orchestrator | Saturday 11 April 2026 02:10:48 +0000 (0:00:00.975) 0:07:37.006 ********
2026-04-11 02:10:57.872562 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:10:57.872569 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:10:57.872576 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:10:57.872584 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:10:57.872591 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:10:57.872598 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:10:57.872605 | orchestrator | changed: [testbed-manager]
2026-04-11 02:10:57.872613 | orchestrator |
2026-04-11 02:10:57.872626 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-04-11 02:11:31.358236 | orchestrator | Saturday 11 April 2026 02:10:57 +0000 (0:00:09.449) 0:07:46.456 ********
2026-04-11 02:11:31.358334 | orchestrator | ok: [testbed-manager]
2026-04-11 02:11:31.358345 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:11:31.358351 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:11:31.358356 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:11:31.358361 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:11:31.358368 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:11:31.358373 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:11:31.358379 | orchestrator |
2026-04-11 02:11:31.358386 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-04-11 02:11:31.358392 | orchestrator | Saturday 11 April 2026 02:11:00 +0000 (0:00:02.214) 0:07:48.670 ********
2026-04-11 02:11:31.358444 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:11:31.358450 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:11:31.358456 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:11:31.358462 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:11:31.358468 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:11:31.358473 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:11:31.358481 | orchestrator |
2026-04-11 02:11:31.358487 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-11 02:11:31.358493 | orchestrator | Saturday 11 April 2026 02:11:01 +0000 (0:00:01.323) 0:07:49.994 ********
2026-04-11 02:11:31.358499 | orchestrator | changed: [testbed-manager]
2026-04-11 02:11:31.358506 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:11:31.358512 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:11:31.358518 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:11:31.358524 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:11:31.358553 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:11:31.358559 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:11:31.358566 | orchestrator |
2026-04-11 02:11:31.358572 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-11 02:11:31.358578 | orchestrator |
2026-04-11 02:11:31.358584 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-11 02:11:31.358590 | orchestrator | Saturday 11 April 2026 02:11:02 +0000 (0:00:01.272) 0:07:51.267 ********
2026-04-11 02:11:31.358595 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:11:31.358601 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:11:31.358606 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:11:31.358613 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:11:31.358619 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:11:31.358624 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:11:31.358630 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:11:31.358635 | orchestrator |
2026-04-11 02:11:31.358641 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-11 02:11:31.358646 | orchestrator |
2026-04-11 02:11:31.358652 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-11 02:11:31.358659 | orchestrator | Saturday 11 April 2026 02:11:03 +0000 (0:00:00.897) 0:07:52.164 ********
2026-04-11 02:11:31.358664 | orchestrator | changed: [testbed-manager]
2026-04-11 02:11:31.358670 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:11:31.358676 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:11:31.358681 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:11:31.358687 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:11:31.358693 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:11:31.358699 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:11:31.358705 | orchestrator |
2026-04-11 02:11:31.358711 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-11 02:11:31.358718 | orchestrator | Saturday 11 April 2026 02:11:05 +0000 (0:00:01.579) 0:07:53.743 ********
2026-04-11 02:11:31.358724 | orchestrator | ok: [testbed-manager]
2026-04-11 02:11:31.358729 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:11:31.358735 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:11:31.358761 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:11:31.358768 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:11:31.358788 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:11:31.358795 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:11:31.358801 | orchestrator |
2026-04-11 02:11:31.358807 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-11 02:11:31.358813 | orchestrator | Saturday 11 April 2026 02:11:06 +0000 (0:00:01.447) 0:07:55.191 ********
2026-04-11 02:11:31.358820 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:11:31.358826 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:11:31.358832 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:11:31.358838 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:11:31.358845 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:11:31.358867 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:11:31.358873 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:11:31.358879 | orchestrator |
2026-04-11 02:11:31.358885 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-11 02:11:31.358891 | orchestrator | Saturday 11 April 2026 02:11:07 +0000 (0:00:00.545) 0:07:55.737 ********
2026-04-11 02:11:31.358898 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:11:31.358906 | orchestrator |
2026-04-11 02:11:31.358912 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-11 02:11:31.358937 | orchestrator | Saturday 11 April 2026 02:11:08 +0000 (0:00:01.100) 0:07:56.837 ********
2026-04-11 02:11:31.358946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:11:31.358964 | orchestrator |
2026-04-11 02:11:31.358971 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-11 02:11:31.358976 | orchestrator | Saturday 11 April 2026 02:11:09 +0000 (0:00:00.921) 0:07:57.758 ********
2026-04-11 02:11:31.358982 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:11:31.358988 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:11:31.358994 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:11:31.359000 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:11:31.359007 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:11:31.359013 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:11:31.359019 | orchestrator | changed: [testbed-manager]
2026-04-11 02:11:31.359025 | orchestrator |
2026-04-11 02:11:31.359050 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-11 02:11:31.359055 | orchestrator | Saturday 11 April 2026 02:11:17 +0000 (0:00:08.753) 0:08:06.511 ********
2026-04-11 02:11:31.359058 | orchestrator | changed: [testbed-manager]
2026-04-11 02:11:31.359062 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:11:31.359066 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:11:31.359069 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:11:31.359073 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:11:31.359077 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:11:31.359082 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:11:31.359088 | orchestrator |
2026-04-11 02:11:31.359094 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-11 02:11:31.359102 | orchestrator | Saturday 11 April 2026 02:11:19 +0000 (0:00:01.159) 0:08:07.671 ********
2026-04-11 02:11:31.359110 | orchestrator | changed: [testbed-manager]
2026-04-11 02:11:31.359117 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:11:31.359123 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:11:31.359150 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:11:31.359157 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:11:31.359163 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:11:31.359169 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:11:31.359175 | orchestrator |
2026-04-11 02:11:31.359181 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-11 02:11:31.359188 | orchestrator | Saturday 11 April 2026 02:11:20 +0000 (0:00:01.402) 0:08:09.074 ********
2026-04-11 02:11:31.359194 | orchestrator | changed: [testbed-manager]
2026-04-11 02:11:31.359200 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:11:31.359206 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:11:31.359212 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:11:31.359218 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:11:31.359225 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:11:31.359232 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:11:31.359238 | orchestrator |
2026-04-11 02:11:31.359244 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-11 02:11:31.359251 | orchestrator | Saturday 11 April 2026 02:11:23 +0000 (0:00:02.803) 0:08:11.877 ********
2026-04-11 02:11:31.359258 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:11:31.359262 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:11:31.359265 | orchestrator | changed: [testbed-manager]
2026-04-11 02:11:31.359269 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:11:31.359273 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:11:31.359276 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:11:31.359280 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:11:31.359284 | orchestrator |
2026-04-11 02:11:31.359287 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-11 02:11:31.359291 | orchestrator | Saturday 11 April 2026 02:11:24 +0000 (0:00:01.287) 0:08:13.164 ********
2026-04-11 02:11:31.359295 | orchestrator | changed: [testbed-manager]
2026-04-11 02:11:31.359299 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:11:31.359309 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:11:31.359313 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:11:31.359317 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:11:31.359320 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:11:31.359324 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:11:31.359328 | orchestrator |
2026-04-11 02:11:31.359332 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-11 02:11:31.359335 | orchestrator |
2026-04-11 02:11:31.359339 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-11 02:11:31.359343 | orchestrator | Saturday 11 April 2026 02:11:25 +0000 (0:00:01.166) 0:08:14.331 ********
2026-04-11 02:11:31.359347 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:11:31.359351 | orchestrator |
2026-04-11 02:11:31.359355 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-11 02:11:31.359359 | orchestrator | Saturday 11 April 2026 02:11:26 +0000 (0:00:01.047) 0:08:15.379 ********
2026-04-11 02:11:31.359362 | orchestrator | ok: [testbed-manager]
2026-04-11 02:11:31.359366 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:11:31.359370 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:11:31.359374 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:11:31.359377 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:11:31.359381 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:11:31.359390 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:11:31.359394 | orchestrator |
2026-04-11 02:11:31.359398 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-11 02:11:31.359401 | orchestrator | Saturday 11 April 2026 02:11:28 +0000 (0:00:01.231) 0:08:16.611 ********
2026-04-11 02:11:31.359405 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:11:31.359409 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:11:31.359413 | orchestrator | changed: [testbed-manager]
2026-04-11 02:11:31.359416 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:11:31.359420 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:11:31.359424 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:11:31.359427 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:11:31.359431 | orchestrator |
2026-04-11 02:11:31.359435 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-11 02:11:31.359439 | orchestrator | Saturday 11 April 2026 02:11:29 +0000 (0:00:01.260) 0:08:17.872 ********
2026-04-11 02:11:31.359443 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:11:31.359447 | orchestrator |
2026-04-11 02:11:31.359452 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-11 02:11:31.359458 | orchestrator | Saturday 11 April 2026 02:11:30 +0000 (0:00:01.158) 0:08:19.030 ********
2026-04-11 02:11:31.359464 | orchestrator | ok: [testbed-manager]
2026-04-11 02:11:31.359474 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:11:31.359499 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:11:31.359519 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:11:31.359526 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:11:31.359532 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:11:31.359538 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:11:31.359544 | orchestrator |
2026-04-11 02:11:31.359558 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-11 02:11:33.231130 | orchestrator | Saturday 11 April 2026 02:11:31 +0000 (0:00:00.914) 0:08:19.945 ********
2026-04-11 02:11:33.231211 | orchestrator | changed: [testbed-manager]
2026-04-11 02:11:33.231218 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:11:33.231223 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:11:33.231227 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:11:33.231231 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:11:33.231235 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:11:33.231240 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:11:33.231263 | orchestrator |
2026-04-11 02:11:33.231268 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:11:33.231273 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-11 02:11:33.231279 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-11 02:11:33.231283 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-11 02:11:33.231287 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-11 02:11:33.231291 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-04-11 02:11:33.231295 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-11 02:11:33.231298 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-11 02:11:33.231302 | orchestrator |
2026-04-11 02:11:33.231306 | orchestrator |
2026-04-11 02:11:33.231310 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:11:33.231314 | orchestrator | Saturday 11 April 2026 02:11:32 +0000 (0:00:01.249) 0:08:21.195 ********
2026-04-11 02:11:33.231318 | orchestrator | ===============================================================================
2026-04-11 02:11:33.231322 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.93s
2026-04-11 02:11:33.231325 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.68s
2026-04-11 02:11:33.231329 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.01s
2026-04-11 02:11:33.231333 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.08s
2026-04-11 02:11:33.231337 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.56s
2026-04-11 02:11:33.231341 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.97s
2026-04-11 02:11:33.231345 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.19s
2026-04-11 02:11:33.231350 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.45s
2026-04-11 02:11:33.231354 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.25s
2026-04-11 02:11:33.231357 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.17s
2026-04-11 02:11:33.231361 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.75s
2026-04-11 02:11:33.231365 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.18s
2026-04-11 02:11:33.231369 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.05s
2026-04-11 02:11:33.231383 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.86s
2026-04-11 02:11:33.231387 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.78s
2026-04-11 02:11:33.231391 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.18s
2026-04-11 02:11:33.231394 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.34s
2026-04-11 02:11:33.231398 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.93s
2026-04-11 02:11:33.231402 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.90s
2026-04-11 02:11:33.231406 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.53s
2026-04-11 02:11:33.632366 | orchestrator | + osism apply fail2ban
2026-04-11 02:11:47.005160 | orchestrator | 2026-04-11 02:11:46 | INFO  | Task b47a2a59-1a86-42fe-b0c5-aa73ac894909 (fail2ban) was prepared for execution.
2026-04-11 02:11:47.005324 | orchestrator | 2026-04-11 02:11:46 | INFO  | It takes a moment until task b47a2a59-1a86-42fe-b0c5-aa73ac894909 (fail2ban) has been started and output is visible here.
2026-04-11 02:12:10.401948 | orchestrator |
2026-04-11 02:12:10.402124 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-11 02:12:10.402153 | orchestrator |
2026-04-11 02:12:10.402163 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-11 02:12:10.402172 | orchestrator | Saturday 11 April 2026 02:11:52 +0000 (0:00:00.301) 0:00:00.301 ********
2026-04-11 02:12:10.402183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 02:12:10.402200 | orchestrator |
2026-04-11 02:12:10.402214 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-11 02:12:10.402228 | orchestrator | Saturday 11 April 2026 02:11:53 +0000 (0:00:01.252) 0:00:01.554 ********
2026-04-11 02:12:10.402242 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:12:10.402257 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:12:10.402270 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:12:10.402282 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:12:10.402296 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:12:10.402309 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:12:10.402323 | orchestrator | changed: [testbed-manager]
2026-04-11 02:12:10.402338 | orchestrator |
2026-04-11 02:12:10.402351 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-11 02:12:10.402359 | orchestrator | Saturday 11 April 2026 02:12:05 +0000 (0:00:11.756) 0:00:13.311 ********
2026-04-11 02:12:10.402368 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:12:10.402376 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:12:10.402384 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:12:10.402392 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:12:10.402400 | orchestrator | changed: [testbed-manager]
2026-04-11 02:12:10.402408 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:12:10.402416 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:12:10.402424 | orchestrator |
2026-04-11 02:12:10.402432 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-11 02:12:10.402440 | orchestrator | Saturday 11 April 2026 02:12:06 +0000 (0:00:01.471) 0:00:14.782 ********
2026-04-11 02:12:10.402448 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:12:10.402457 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:12:10.402465 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:12:10.402473 | orchestrator | ok: [testbed-manager]
2026-04-11 02:12:10.402481 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:12:10.402488 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:12:10.402496 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:12:10.402504 | orchestrator |
2026-04-11 02:12:10.402512 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-11 02:12:10.402520 | orchestrator | Saturday 11 April 2026 02:12:08 +0000 (0:00:01.525) 0:00:16.308 ********
2026-04-11 02:12:10.402528 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:12:10.402536 | orchestrator | changed: [testbed-manager]
2026-04-11 02:12:10.402544 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:12:10.402552 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:12:10.402559 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:12:10.402567 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:12:10.402575 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:12:10.402583 | orchestrator |
2026-04-11 02:12:10.402591 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:12:10.402599 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:12:10.402634 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:12:10.402643 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:12:10.402651 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:12:10.402659 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:12:10.402667 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:12:10.402675 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:12:10.402683 | orchestrator |
2026-04-11 02:12:10.402691 | orchestrator |
2026-04-11 02:12:10.402699 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:12:10.402707 | orchestrator | Saturday 11 April 2026 02:12:09 +0000 (0:00:01.745) 0:00:18.053 ********
2026-04-11 02:12:10.402715 | orchestrator | ===============================================================================
2026-04-11 02:12:10.402723 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.76s
2026-04-11 02:12:10.402731 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.75s
2026-04-11 02:12:10.402739 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.53s
2026-04-11 02:12:10.402747 | orchestrator | osism.services.fail2ban : 
Copy configuration files ---------------------- 1.47s 2026-04-11 02:12:10.402755 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.25s 2026-04-11 02:12:10.755988 | orchestrator | + osism apply network 2026-04-11 02:12:23.026264 | orchestrator | 2026-04-11 02:12:23 | INFO  | Task 0c026df1-a502-41f5-b0f4-08f557411547 (network) was prepared for execution. 2026-04-11 02:12:23.026378 | orchestrator | 2026-04-11 02:12:23 | INFO  | It takes a moment until task 0c026df1-a502-41f5-b0f4-08f557411547 (network) has been started and output is visible here. 2026-04-11 02:12:53.978336 | orchestrator | 2026-04-11 02:12:53.978445 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-11 02:12:53.978460 | orchestrator | 2026-04-11 02:12:53.978471 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-11 02:12:53.978482 | orchestrator | Saturday 11 April 2026 02:12:27 +0000 (0:00:00.272) 0:00:00.272 ******** 2026-04-11 02:12:53.978492 | orchestrator | ok: [testbed-manager] 2026-04-11 02:12:53.978503 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:12:53.978513 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:12:53.978522 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:12:53.978532 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:12:53.978542 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:12:53.978552 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:12:53.978561 | orchestrator | 2026-04-11 02:12:53.978572 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-11 02:12:53.978581 | orchestrator | Saturday 11 April 2026 02:12:28 +0000 (0:00:00.788) 0:00:01.060 ******** 2026-04-11 02:12:53.978594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 02:12:53.978606 | orchestrator |
2026-04-11 02:12:53.978616 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-04-11 02:12:53.978626 | orchestrator | Saturday 11 April 2026 02:12:29 +0000 (0:00:02.035) 0:00:02.456 ********
2026-04-11 02:12:53.978660 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:12:53.978671 | orchestrator | ok: [testbed-manager]
2026-04-11 02:12:53.978680 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:12:53.978690 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:12:53.978700 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:12:53.978709 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:12:53.978719 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:12:53.978729 | orchestrator |
2026-04-11 02:12:53.978792 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-11 02:12:53.978802 | orchestrator | Saturday 11 April 2026 02:12:31 +0000 (0:00:01.825) 0:00:04.492 ********
2026-04-11 02:12:53.978812 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:12:53.978822 | orchestrator | ok: [testbed-manager]
2026-04-11 02:12:53.978832 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:12:53.978842 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:12:53.978852 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:12:53.978862 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:12:53.978872 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:12:53.978883 | orchestrator |
2026-04-11 02:12:53.978894 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-11 02:12:53.978906 | orchestrator | Saturday 11 April 2026 02:12:33 +0000 (0:00:01.825) 0:00:06.318 ********
2026-04-11 02:12:53.978918 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-04-11 02:12:53.978929 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-11 02:12:53.978941 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-11 02:12:53.978952 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-04-11 02:12:53.978963 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-04-11 02:12:53.978974 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-04-11 02:12:53.978985 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-04-11 02:12:53.978996 | orchestrator |
2026-04-11 02:12:53.979024 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-04-11 02:12:53.979036 | orchestrator | Saturday 11 April 2026 02:12:34 +0000 (0:00:01.032) 0:00:07.350 ********
2026-04-11 02:12:53.979048 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 02:12:53.979060 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-11 02:12:53.979071 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-11 02:12:53.979082 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 02:12:53.979093 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-11 02:12:53.979104 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-11 02:12:53.979116 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-11 02:12:53.979127 | orchestrator |
2026-04-11 02:12:53.979138 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-04-11 02:12:53.979150 | orchestrator | Saturday 11 April 2026 02:12:38 +0000 (0:00:03.717) 0:00:11.067 ********
2026-04-11 02:12:53.979161 | orchestrator | changed: [testbed-manager]
2026-04-11 02:12:53.979172 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:12:53.979183 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:12:53.979195 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:12:53.979205 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:12:53.979222 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:12:53.979233 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:12:53.979244 | orchestrator |
2026-04-11 02:12:53.979254 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-04-11 02:12:53.979264 | orchestrator | Saturday 11 April 2026 02:12:40 +0000 (0:00:01.659) 0:00:12.726 ********
2026-04-11 02:12:53.979274 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 02:12:53.979284 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 02:12:53.979293 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-11 02:12:53.979303 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-11 02:12:53.979313 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-11 02:12:53.979331 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-11 02:12:53.979341 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-11 02:12:53.979351 | orchestrator |
2026-04-11 02:12:53.979360 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-04-11 02:12:53.979370 | orchestrator | Saturday 11 April 2026 02:12:42 +0000 (0:00:01.897) 0:00:14.623 ********
2026-04-11 02:12:53.979380 | orchestrator | ok: [testbed-manager]
2026-04-11 02:12:53.979390 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:12:53.979399 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:12:53.979409 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:12:53.979419 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:12:53.979428 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:12:53.979438 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:12:53.979448 | orchestrator |
2026-04-11 02:12:53.979458 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-04-11 02:12:53.979489 | orchestrator | Saturday 11 April 2026 02:12:43 +0000 (0:00:01.196) 0:00:15.820 ********
2026-04-11 02:12:53.979506 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:12:53.979520 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:12:53.979530 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:12:53.979540 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:12:53.979549 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:12:53.979559 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:12:53.979568 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:12:53.979578 | orchestrator |
2026-04-11 02:12:53.979588 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-04-11 02:12:53.979598 | orchestrator | Saturday 11 April 2026 02:12:44 +0000 (0:00:00.708) 0:00:16.529 ********
2026-04-11 02:12:53.979607 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:12:53.979617 | orchestrator | ok: [testbed-manager]
2026-04-11 02:12:53.979627 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:12:53.979636 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:12:53.979646 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:12:53.979656 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:12:53.979665 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:12:53.979675 | orchestrator |
2026-04-11 02:12:53.979685 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-04-11 02:12:53.979694 | orchestrator | Saturday 11 April 2026 02:12:46 +0000 (0:00:02.373) 0:00:18.902 ********
2026-04-11 02:12:53.979704 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:12:53.979714 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:12:53.979723 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:12:53.979753 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:12:53.979763 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:12:53.979773 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:12:53.979784 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-04-11 02:12:53.979795 | orchestrator |
2026-04-11 02:12:53.979805 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-04-11 02:12:53.979815 | orchestrator | Saturday 11 April 2026 02:12:47 +0000 (0:00:01.022) 0:00:19.924 ********
2026-04-11 02:12:53.979825 | orchestrator | ok: [testbed-manager]
2026-04-11 02:12:53.979834 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:12:53.979844 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:12:53.979854 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:12:53.979863 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:12:53.979873 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:12:53.979883 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:12:53.979892 | orchestrator |
2026-04-11 02:12:53.979902 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-04-11 02:12:53.979912 | orchestrator | Saturday 11 April 2026 02:12:49 +0000 (0:00:01.803) 0:00:21.727 ********
2026-04-11 02:12:53.979922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 02:12:53.979941 | orchestrator |
2026-04-11 02:12:53.979951 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-11 02:12:53.979961 | orchestrator | Saturday 11 April 2026 02:12:50 +0000 (0:00:01.505) 0:00:23.233 ********
2026-04-11 02:12:53.979971 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:12:53.979980 | orchestrator | ok: [testbed-manager]
2026-04-11 02:12:53.979990 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:12:53.980000 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:12:53.980010 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:12:53.980019 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:12:53.980029 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:12:53.980038 | orchestrator |
2026-04-11 02:12:53.980048 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-04-11 02:12:53.980058 | orchestrator | Saturday 11 April 2026 02:12:51 +0000 (0:00:01.180) 0:00:24.414 ********
2026-04-11 02:12:53.980068 | orchestrator | ok: [testbed-manager]
2026-04-11 02:12:53.980078 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:12:53.980087 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:12:53.980097 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:12:53.980107 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:12:53.980116 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:12:53.980126 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:12:53.980135 | orchestrator |
2026-04-11 02:12:53.980145 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-11 02:12:53.980155 | orchestrator | Saturday 11 April 2026 02:12:52 +0000 (0:00:00.723) 0:00:25.137 ********
2026-04-11 02:12:53.980169 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-04-11 02:12:53.980180 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-04-11 02:12:53.980190 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-04-11 02:12:53.980199 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-04-11 02:12:53.980209 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 02:12:53.980219 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-04-11 02:12:53.980229 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 02:12:53.980238 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-04-11 02:12:53.980248 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 02:12:53.980258 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 02:12:53.980267 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 02:12:53.980277 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 02:12:53.980287 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-04-11 02:12:53.980297 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-11 02:12:53.980306 | orchestrator |
2026-04-11 02:12:53.980323 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-04-11 02:13:13.334946 | orchestrator | Saturday 11 April 2026 02:12:53 +0000 (0:00:01.319) 0:00:26.456 ********
2026-04-11 02:13:13.335046 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:13:13.335057 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:13:13.335062 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:13:13.335067 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:13:13.335072 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:13:13.335077 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:13:13.335082 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:13:13.335087 | orchestrator |
2026-04-11 02:13:13.335092 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-04-11 02:13:13.335115 | orchestrator | Saturday 11 April 2026 02:12:54 +0000 (0:00:00.726) 0:00:27.182 ********
2026-04-11 02:13:13.335123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1,
testbed-manager, testbed-node-3, testbed-node-0, testbed-node-2, testbed-node-5, testbed-node-4
2026-04-11 02:13:13.335146 | orchestrator |
2026-04-11 02:13:13.335155 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-04-11 02:13:13.335170 | orchestrator | Saturday 11 April 2026 02:12:59 +0000 (0:00:05.060) 0:00:32.243 ********
2026-04-11 02:13:13.335180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335188 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:13.335216 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:13.335225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335258 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335263 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:13.335272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:13.335290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:13.335302 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:13.335307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:13.335311 | orchestrator |
2026-04-11 02:13:13.335316 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-04-11 02:13:13.335322 | orchestrator | Saturday 11 April 2026 02:13:07 +0000 (0:00:07.342) 0:00:39.585 ********
2026-04-11 02:13:13.335327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:13.335336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335341 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335355 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:13.335368 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-04-11 02:13:13.335373 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:13.335377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:13.335386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:13.335396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:20.660011 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-04-11 02:13:20.660154 | orchestrator |
2026-04-11 02:13:20.660186 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-04-11 02:13:20.660205 | orchestrator | Saturday 11 April 2026 02:13:13 +0000 (0:00:06.228) 0:00:45.813 ********
2026-04-11 02:13:20.660223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 02:13:20.660240 | orchestrator |
2026-04-11 02:13:20.660255 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-11 02:13:20.660270 | orchestrator | Saturday 11 April 2026 02:13:14 +0000 (0:00:01.486) 0:00:47.300 ********
2026-04-11 02:13:20.660286 | orchestrator | ok: [testbed-manager]
2026-04-11 02:13:20.660303 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:13:20.660319 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:13:20.660336 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:13:20.660353 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:13:20.660370 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:13:20.660387 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:13:20.660404 | orchestrator |
2026-04-11 02:13:20.660421 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-11 02:13:20.660437 | orchestrator | Saturday 11 April 2026 02:13:16 +0000 (0:00:01.294) 0:00:48.594 ********
2026-04-11 02:13:20.660454 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-11 02:13:20.660473 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-11 02:13:20.660490 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-11 02:13:20.660508 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-11 02:13:20.660525 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-11 02:13:20.660542 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-11 02:13:20.660560 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-11 02:13:20.660576 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-11 02:13:20.660594 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:13:20.660613 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-11 02:13:20.660631 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-11 02:13:20.660649 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-11 02:13:20.660666 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:13:20.660715 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-11 02:13:20.660733 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-11 02:13:20.660783 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:13:20.660801 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-11 02:13:20.660817 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-11 02:13:20.660833 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-11 02:13:20.660848 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-11 02:13:20.660883 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-11 02:13:20.660902 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-11 02:13:20.660918 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-11 02:13:20.660934 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:13:20.660951 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-11 02:13:20.660968 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-11 02:13:20.660984 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-11 02:13:20.661001 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-11 02:13:20.661017 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:13:20.661032 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:13:20.661049 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-11 02:13:20.661067 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-11 02:13:20.661085 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-11 02:13:20.661102 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-11 02:13:20.661119 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:13:20.661136 | orchestrator |
2026-04-11 02:13:20.661152 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-11 02:13:20.661194 | orchestrator | Saturday 11 April 2026 02:13:18 +0000 (0:00:02.376) 0:00:50.970 ********
2026-04-11 02:13:20.661212 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:13:20.661228 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:13:20.661244 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:13:20.661261 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:13:20.661277 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:13:20.661293 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:13:20.661309 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:13:20.661324 | orchestrator |
2026-04-11 02:13:20.661337 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-04-11 02:13:20.661346 | orchestrator | Saturday 11 April 2026 02:13:19 +0000 (0:00:00.835) 0:00:51.805 ********
2026-04-11 02:13:20.661356 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:13:20.661366 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:13:20.661375 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:13:20.661385 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:13:20.661396 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:13:20.661405 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:13:20.661415 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:13:20.661424 | orchestrator |
2026-04-11 02:13:20.661434 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:13:20.661446 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-11 02:13:20.661458 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 02:13:20.661479 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 02:13:20.661489 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 02:13:20.661499 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 02:13:20.661508 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 02:13:20.661518 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 02:13:20.661530 | orchestrator |
2026-04-11 02:13:20.661547 | orchestrator |
2026-04-11 02:13:20.661561 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:13:20.661574 | orchestrator | Saturday 11 April 2026 02:13:20 +0000 (0:00:00.839) 0:00:52.645 ********
2026-04-11 02:13:20.661589 | orchestrator | ===============================================================================
2026-04-11 02:13:20.661603 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 7.34s
2026-04-11 02:13:20.661619 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.23s
2026-04-11 02:13:20.661635 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.06s
2026-04-11 02:13:20.661652 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.72s
2026-04-11 02:13:20.661668 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.38s
2026-04-11 02:13:20.661712 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.37s
2026-04-11 02:13:20.661724 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.04s
2026-04-11 02:13:20.661733 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.90s
2026-04-11 02:13:20.661751 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.83s
2026-04-11 02:13:20.661761 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.80s
2026-04-11 02:13:20.661771 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.66s
2026-04-11 02:13:20.661780 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.51s
2026-04-11 02:13:20.661790 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.49s
2026-04-11 02:13:20.661799 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.40s
2026-04-11 02:13:20.661809 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.32s
2026-04-11 02:13:20.661819 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.29s
2026-04-11 02:13:20.661828 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.20s
2026-04-11 02:13:20.661838 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.18s
2026-04-11 02:13:20.661848 | orchestrator | osism.commons.network : Create required directories --------------------- 1.03s
2026-04-11 02:13:20.661857 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.02s
2026-04-11 02:13:21.066161 | orchestrator | + osism apply wireguard
2026-04-11 02:13:33.420564 | orchestrator | 2026-04-11 02:13:33 | INFO  | Task f44b8353-a7ae-4bc3-9d45-3ee67d4395d6 (wireguard) was prepared for execution.
2026-04-11 02:13:33.420733 | orchestrator | 2026-04-11 02:13:33 | INFO  | It takes a moment until task f44b8353-a7ae-4bc3-9d45-3ee67d4395d6 (wireguard) has been started and output is visible here.
2026-04-11 02:13:56.030082 | orchestrator |
2026-04-11 02:13:56.030179 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-11 02:13:56.030213 | orchestrator |
2026-04-11 02:13:56.030222 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-11 02:13:56.030233 | orchestrator | Saturday 11 April 2026 02:13:38 +0000 (0:00:00.261) 0:00:00.261 ********
2026-04-11 02:13:56.030245 | orchestrator | ok: [testbed-manager]
2026-04-11 02:13:56.030259 | orchestrator |
2026-04-11 02:13:56.030269 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-11 02:13:56.030280 | orchestrator | Saturday 11 April 2026 02:13:39 +0000 (0:00:01.704) 0:00:01.966 ********
2026-04-11 02:13:56.030290 | orchestrator | changed: [testbed-manager]
2026-04-11 02:13:56.030302 | orchestrator |
2026-04-11 02:13:56.030317 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-11 02:13:56.030328 | orchestrator | Saturday 11 April 2026 02:13:47 +0000 (0:00:07.621) 0:00:09.587 ********
2026-04-11 02:13:56.030339 | orchestrator | changed: [testbed-manager]
2026-04-11 02:13:56.030351 | orchestrator |
2026-04-11 02:13:56.030361 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-11 02:13:56.030373 | orchestrator | Saturday 11 April 2026 02:13:48 +0000 (0:00:00.607) 0:00:10.195 ********
2026-04-11 02:13:56.030383 | orchestrator | changed: [testbed-manager]
2026-04-11 02:13:56.030395 | orchestrator |
2026-04-11 02:13:56.030430 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-11 02:13:56.030442 | orchestrator | Saturday 11 April 2026 02:13:48 +0000 (0:00:00.484) 0:00:10.680 ********
2026-04-11 02:13:56.030454 | orchestrator | ok: [testbed-manager]
2026-04-11 02:13:56.030466 | orchestrator |
2026-04-11 02:13:56.030478 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-11 02:13:56.030490 | orchestrator | Saturday 11 April 2026 02:13:49 +0000 (0:00:00.809) 0:00:11.489 ********
2026-04-11 02:13:56.030503 | orchestrator | ok: [testbed-manager]
2026-04-11 02:13:56.030514 | orchestrator |
2026-04-11 02:13:56.030521 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-11 02:13:56.030529 | orchestrator | Saturday 11 April 2026 02:13:49 +0000 (0:00:00.457) 0:00:11.947 ********
2026-04-11 02:13:56.030536 | orchestrator | ok: [testbed-manager]
2026-04-11 02:13:56.030543 | orchestrator |
2026-04-11 02:13:56.030550 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-11 02:13:56.030558 | orchestrator | Saturday 11 April 2026 02:13:50 +0000 (0:00:00.502) 0:00:12.449 ********
2026-04-11 02:13:56.030565 | orchestrator | changed: [testbed-manager]
2026-04-11 02:13:56.030572 | orchestrator |
2026-04-11 02:13:56.030579 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-11 02:13:56.030587 | orchestrator | Saturday 11 April 2026 02:13:51 +0000 (0:00:01.230) 0:00:13.679 ********
2026-04-11 02:13:56.030594 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-11 02:13:56.030601 | orchestrator | changed: [testbed-manager]
2026-04-11 02:13:56.030608 | orchestrator |
2026-04-11 02:13:56.030640 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-11 02:13:56.030647 | orchestrator | Saturday 11 April 2026 02:13:52 +0000 (0:00:01.012) 0:00:14.692 ********
2026-04-11 02:13:56.030654 | orchestrator | changed: [testbed-manager]
2026-04-11 02:13:56.030662 | orchestrator |
2026-04-11 02:13:56.030670 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-11 02:13:56.030677 | orchestrator | Saturday 11 April 2026 02:13:54 +0000 (0:00:01.859) 0:00:16.552 ********
2026-04-11 02:13:56.030684 | orchestrator | changed: [testbed-manager]
2026-04-11 02:13:56.030691 | orchestrator |
2026-04-11 02:13:56.030699 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:13:56.030706 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:13:56.030715 | orchestrator |
2026-04-11 02:13:56.030722 | orchestrator |
2026-04-11 02:13:56.030729 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:13:56.030736 | orchestrator | Saturday 11 April 2026 02:13:55 +0000 (0:00:01.061) 0:00:17.613 ********
2026-04-11 02:13:56.030753 | orchestrator | ===============================================================================
2026-04-11 02:13:56.030761 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.62s
2026-04-11 02:13:56.030769 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.86s
2026-04-11 02:13:56.030776 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.70s
2026-04-11 02:13:56.030784 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.23s
2026-04-11 02:13:56.030791 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.06s
2026-04-11 02:13:56.030800 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.01s
2026-04-11 02:13:56.030812 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.81s
2026-04-11 02:13:56.030829 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.61s
2026-04-11 02:13:56.030844 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.50s
2026-04-11 02:13:56.030854 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.48s
2026-04-11 02:13:56.030866 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.46s
2026-04-11 02:13:56.387128 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-11 02:13:56.425256 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-11 02:13:56.425353 | orchestrator | Dload Upload Total Spent Left Speed
2026-04-11 02:13:56.501034 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 184 0 --:--:-- --:--:-- --:--:-- 184
2026-04-11 02:13:56.513261 | orchestrator | + osism apply --environment custom workarounds
2026-04-11 02:13:58.656534 | orchestrator | 2026-04-11 02:13:58 | INFO  | Trying to run play workarounds in environment custom
2026-04-11 02:14:08.829507 | orchestrator | 2026-04-11 02:14:08 | INFO  | Task 748d701a-72ed-418c-ae45-463e27279cd2 (workarounds) was prepared for execution.
2026-04-11 02:14:08.829663 | orchestrator | 2026-04-11 02:14:08 | INFO  | It takes a moment until task 748d701a-72ed-418c-ae45-463e27279cd2 (workarounds) has been started and output is visible here.
2026-04-11 02:14:36.370856 | orchestrator |
2026-04-11 02:14:36.370942 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 02:14:36.370952 | orchestrator |
2026-04-11 02:14:36.370959 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-11 02:14:36.370965 | orchestrator | Saturday 11 April 2026 02:14:13 +0000 (0:00:00.161) 0:00:00.161 ********
2026-04-11 02:14:36.370972 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-11 02:14:36.370979 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-11 02:14:36.370985 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-11 02:14:36.370991 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-11 02:14:36.370997 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-11 02:14:36.371002 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-11 02:14:36.371008 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-11 02:14:36.371014 | orchestrator |
2026-04-11 02:14:36.371020 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-11 02:14:36.371026 | orchestrator |
2026-04-11 02:14:36.371032 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-11 02:14:36.371038 | orchestrator | Saturday 11 April 2026 02:14:14 +0000 (0:00:00.941) 0:00:01.103 ********
2026-04-11 02:14:36.371044 | orchestrator | ok: [testbed-manager]
2026-04-11 02:14:36.371064 | orchestrator |
2026-04-11 02:14:36.371071 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-11 02:14:36.371076 | orchestrator |
2026-04-11 02:14:36.371082 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-11 02:14:36.371089 | orchestrator | Saturday 11 April 2026 02:14:17 +0000 (0:00:02.806) 0:00:03.909 ********
2026-04-11 02:14:36.371094 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:14:36.371100 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:14:36.371106 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:14:36.371112 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:14:36.371117 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:14:36.371123 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:14:36.371129 | orchestrator |
2026-04-11 02:14:36.371135 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-11 02:14:36.371140 | orchestrator |
2026-04-11 02:14:36.371146 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-11 02:14:36.371152 | orchestrator | Saturday 11 April 2026 02:14:19 +0000 (0:00:01.832) 0:00:05.741 ********
2026-04-11 02:14:36.371158 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-11 02:14:36.371164 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-11 02:14:36.371170 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-11 02:14:36.371176 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-11 02:14:36.371186 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-11 02:14:36.371192 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-11 02:14:36.371198 | orchestrator |
2026-04-11 02:14:36.371203 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-11 02:14:36.371209 | orchestrator | Saturday 11 April 2026 02:14:20 +0000 (0:00:01.659) 0:00:07.400 ********
2026-04-11 02:14:36.371215 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:14:36.371223 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:14:36.371234 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:14:36.371243 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:14:36.371252 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:14:36.371262 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:14:36.371271 | orchestrator |
2026-04-11 02:14:36.371280 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-11 02:14:36.371289 | orchestrator | Saturday 11 April 2026 02:14:24 +0000 (0:00:03.701) 0:00:11.102 ********
2026-04-11 02:14:36.371298 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:14:36.371307 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:14:36.371317 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:14:36.371327 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:14:36.371336 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:14:36.371346 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:14:36.371355 | orchestrator |
2026-04-11 02:14:36.371365 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-11 02:14:36.371375 | orchestrator |
2026-04-11 02:14:36.371384 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-11 02:14:36.371394 | orchestrator | Saturday 11 April 2026 02:14:25 +0000 (0:00:00.836) 0:00:11.939 ********
2026-04-11 02:14:36.371405 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:14:36.371415 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:14:36.371423 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:14:36.371430 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:14:36.371436 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:14:36.371455 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:14:36.371468 | orchestrator | changed: [testbed-manager]
2026-04-11 02:14:36.371474 | orchestrator |
2026-04-11 02:14:36.371481 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-11 02:14:36.371488 | orchestrator | Saturday 11 April 2026 02:14:26 +0000 (0:00:01.695) 0:00:13.634 ********
2026-04-11 02:14:36.371494 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:14:36.371501 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:14:36.371507 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:14:36.371514 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:14:36.371520 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:14:36.371527 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:14:36.371570 | orchestrator | changed: [testbed-manager]
2026-04-11 02:14:36.371582 | orchestrator |
2026-04-11 02:14:36.371592 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-11 02:14:36.371602 | orchestrator | Saturday 11 April 2026 02:14:28 +0000 (0:00:01.740) 0:00:15.374 ********
2026-04-11 02:14:36.371612 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:14:36.371621 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:14:36.371628 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:14:36.371635 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:14:36.371641 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:14:36.371648 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:14:36.371654 | orchestrator | ok: [testbed-manager]
2026-04-11 02:14:36.371661 | orchestrator |
2026-04-11 02:14:36.371667 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-11 02:14:36.371674 | orchestrator | Saturday 11 April 2026 02:14:30 +0000 (0:00:01.678) 0:00:17.053 ********
2026-04-11 02:14:36.371680 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:14:36.371687 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:14:36.371694 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:14:36.371700 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:14:36.371706 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:14:36.371713 | orchestrator | changed: [testbed-manager]
2026-04-11 02:14:36.371719 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:14:36.371726 | orchestrator |
2026-04-11 02:14:36.371732 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-11 02:14:36.371739 | orchestrator | Saturday 11 April 2026 02:14:32 +0000 (0:00:02.312) 0:00:19.365 ********
2026-04-11 02:14:36.371746 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:14:36.371752 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:14:36.371759 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:14:36.371766 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:14:36.371772 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:14:36.371779 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:14:36.371785 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:14:36.371791 | orchestrator |
2026-04-11 02:14:36.371797 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-11 02:14:36.371802 | orchestrator |
2026-04-11 02:14:36.371808 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-11 02:14:36.371814 | orchestrator | Saturday 11 April 2026 02:14:33 +0000 (0:00:00.703) 0:00:20.069 ********
2026-04-11 02:14:36.371820 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:14:36.371826 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:14:36.371831 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:14:36.371837 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:14:36.371843 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:14:36.371849 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:14:36.371854 | orchestrator | ok: [testbed-manager]
2026-04-11 02:14:36.371860 | orchestrator |
2026-04-11 02:14:36.371866 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:14:36.371873 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:14:36.371880 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:14:36.371894 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:14:36.371900 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:14:36.371906 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:14:36.371912 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:14:36.371918 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:14:36.371923 | orchestrator |
2026-04-11 02:14:36.371929 | orchestrator |
2026-04-11 02:14:36.371935 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:14:36.371941 | orchestrator | Saturday 11 April 2026 02:14:36 +0000 (0:00:02.989) 0:00:23.059 ********
2026-04-11 02:14:36.371947 | orchestrator | ===============================================================================
2026-04-11 02:14:36.371953 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.70s
2026-04-11 02:14:36.371959 | orchestrator | Install python3-docker -------------------------------------------------- 2.99s
2026-04-11 02:14:36.371965 | orchestrator | Apply netplan configuration --------------------------------------------- 2.81s
2026-04-11 02:14:36.371971 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.31s
2026-04-11 02:14:36.371977 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s
2026-04-11 02:14:36.371982 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.74s
2026-04-11 02:14:36.371988 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.70s
2026-04-11 02:14:36.371994 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.68s
2026-04-11 02:14:36.372000 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.66s
2026-04-11 02:14:36.372006 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.94s
2026-04-11 02:14:36.372012 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.84s
2026-04-11 02:14:36.372022 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.70s
2026-04-11 02:14:37.138092 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-11 02:14:49.448940 | orchestrator | 2026-04-11 02:14:49 | INFO  | Task 1ce5f57d-4755-4bde-8dcb-b57603faa4d0 (reboot) was prepared for execution.
2026-04-11 02:14:49.449034 | orchestrator | 2026-04-11 02:14:49 | INFO  | It takes a moment until task 1ce5f57d-4755-4bde-8dcb-b57603faa4d0 (reboot) has been started and output is visible here. 2026-04-11 02:15:00.379249 | orchestrator | 2026-04-11 02:15:00.379349 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-11 02:15:00.379359 | orchestrator | 2026-04-11 02:15:00.379366 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-11 02:15:00.379389 | orchestrator | Saturday 11 April 2026 02:14:54 +0000 (0:00:00.278) 0:00:00.278 ******** 2026-04-11 02:15:00.379404 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:15:00.379411 | orchestrator | 2026-04-11 02:15:00.379418 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-11 02:15:00.379425 | orchestrator | Saturday 11 April 2026 02:14:54 +0000 (0:00:00.132) 0:00:00.410 ******** 2026-04-11 02:15:00.379432 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:15:00.379438 | orchestrator | 2026-04-11 02:15:00.379445 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-11 02:15:00.379476 | orchestrator | Saturday 11 April 2026 02:14:55 +0000 (0:00:00.916) 0:00:01.327 ******** 2026-04-11 02:15:00.379483 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:15:00.379489 | orchestrator | 2026-04-11 02:15:00.379495 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-11 02:15:00.379547 | orchestrator | 2026-04-11 02:15:00.379553 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-11 02:15:00.379559 | orchestrator | Saturday 11 April 2026 02:14:55 +0000 (0:00:00.126) 0:00:01.454 ******** 2026-04-11 02:15:00.379564 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:15:00.379569 | 
orchestrator | 2026-04-11 02:15:00.379575 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-11 02:15:00.379580 | orchestrator | Saturday 11 April 2026 02:14:55 +0000 (0:00:00.099) 0:00:01.554 ******** 2026-04-11 02:15:00.379586 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:15:00.379591 | orchestrator | 2026-04-11 02:15:00.379597 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-11 02:15:00.379602 | orchestrator | Saturday 11 April 2026 02:14:56 +0000 (0:00:00.693) 0:00:02.247 ******** 2026-04-11 02:15:00.379608 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:15:00.379613 | orchestrator | 2026-04-11 02:15:00.379619 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-11 02:15:00.379624 | orchestrator | 2026-04-11 02:15:00.379630 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-11 02:15:00.379636 | orchestrator | Saturday 11 April 2026 02:14:56 +0000 (0:00:00.125) 0:00:02.373 ******** 2026-04-11 02:15:00.379642 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:15:00.379647 | orchestrator | 2026-04-11 02:15:00.379653 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-11 02:15:00.379660 | orchestrator | Saturday 11 April 2026 02:14:56 +0000 (0:00:00.250) 0:00:02.623 ******** 2026-04-11 02:15:00.379666 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:15:00.379671 | orchestrator | 2026-04-11 02:15:00.379692 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-11 02:15:00.379700 | orchestrator | Saturday 11 April 2026 02:14:57 +0000 (0:00:00.690) 0:00:03.314 ******** 2026-04-11 02:15:00.379706 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:15:00.379713 | orchestrator | 2026-04-11 02:15:00.379719 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-11 02:15:00.379725 | orchestrator | 2026-04-11 02:15:00.379732 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-11 02:15:00.379738 | orchestrator | Saturday 11 April 2026 02:14:57 +0000 (0:00:00.133) 0:00:03.447 ******** 2026-04-11 02:15:00.379744 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:15:00.379750 | orchestrator | 2026-04-11 02:15:00.379756 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-11 02:15:00.379763 | orchestrator | Saturday 11 April 2026 02:14:57 +0000 (0:00:00.139) 0:00:03.587 ******** 2026-04-11 02:15:00.379769 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:15:00.379776 | orchestrator | 2026-04-11 02:15:00.379782 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-11 02:15:00.379789 | orchestrator | Saturday 11 April 2026 02:14:58 +0000 (0:00:00.673) 0:00:04.260 ******** 2026-04-11 02:15:00.379795 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:15:00.379802 | orchestrator | 2026-04-11 02:15:00.379808 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-11 02:15:00.379814 | orchestrator | 2026-04-11 02:15:00.379820 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-11 02:15:00.379827 | orchestrator | Saturday 11 April 2026 02:14:58 +0000 (0:00:00.143) 0:00:04.404 ******** 2026-04-11 02:15:00.379833 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:15:00.379839 | orchestrator | 2026-04-11 02:15:00.379845 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-11 02:15:00.379861 | orchestrator | Saturday 11 April 2026 02:14:58 +0000 (0:00:00.126) 0:00:04.530 ******** 2026-04-11 
02:15:00.379868 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:15:00.379874 | orchestrator |
2026-04-11 02:15:00.379880 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-11 02:15:00.379887 | orchestrator | Saturday 11 April 2026 02:14:59 +0000 (0:00:00.643) 0:00:05.174 ********
2026-04-11 02:15:00.379893 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:15:00.379900 | orchestrator |
2026-04-11 02:15:00.379907 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-11 02:15:00.379913 | orchestrator |
2026-04-11 02:15:00.379919 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-11 02:15:00.379924 | orchestrator | Saturday 11 April 2026 02:14:59 +0000 (0:00:00.126) 0:00:05.301 ********
2026-04-11 02:15:00.379931 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:15:00.379936 | orchestrator |
2026-04-11 02:15:00.379941 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-11 02:15:00.379948 | orchestrator | Saturday 11 April 2026 02:14:59 +0000 (0:00:00.639) 0:00:05.426 ********
2026-04-11 02:15:00.379953 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:15:00.379960 | orchestrator |
2026-04-11 02:15:00.379966 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-11 02:15:00.379973 | orchestrator | Saturday 11 April 2026 02:14:59 +0000 (0:00:00.639) 0:00:06.065 ********
2026-04-11 02:15:00.380000 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:15:00.380008 | orchestrator |
2026-04-11 02:15:00.380014 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:15:00.380022 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:15:00.380030 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:15:00.380035 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:15:00.380041 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:15:00.380046 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:15:00.380052 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:15:00.380057 | orchestrator |
2026-04-11 02:15:00.380063 | orchestrator |
2026-04-11 02:15:00.380069 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:15:00.380075 | orchestrator | Saturday 11 April 2026 02:14:59 +0000 (0:00:00.039) 0:00:06.104 ********
2026-04-11 02:15:00.380081 | orchestrator | ===============================================================================
2026-04-11 02:15:00.380087 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.26s
2026-04-11 02:15:00.380094 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.87s
2026-04-11 02:15:00.380100 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.70s
2026-04-11 02:15:00.781007 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-04-11 02:15:13.219538 | orchestrator | 2026-04-11 02:15:13 | INFO  | Task ecd5316d-2bf4-4ee6-a60b-c5a2c58a2e03 (wait-for-connection) was prepared for execution.
2026-04-11 02:15:13.219655 | orchestrator | 2026-04-11 02:15:13 | INFO  | It takes a moment until task ecd5316d-2bf4-4ee6-a60b-c5a2c58a2e03 (wait-for-connection) has been started and output is visible here.
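The pattern above (a reboot play that deliberately does not wait, followed by a separate `wait-for-connection` play before the run continues) can be sketched as a small driver function. The `osism apply wait-for-connection` invocation is taken verbatim from the log; the `reboot` play name and the wrapper function itself are assumptions for illustration.

```shell
# Sketch of the reboot-and-rejoin step, assuming an "osism apply reboot"
# play exists alongside the wait-for-connection play seen in the log.
reboot_and_wait() {
    # Fire the reboot; nodes go down mid-play, so it must not block on them.
    osism apply reboot -l testbed-nodes -e ireallymeanit=yes
    # Then block until every node answers SSH again before continuing.
    osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
}
```

The `ireallymeanit=yes` extra-var is the guard that the "Exit playbook, if user did not mean to reboot systems" task checks before allowing the reboot.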
2026-04-11 02:15:30.721211 | orchestrator |
2026-04-11 02:15:30.721288 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-04-11 02:15:30.721294 | orchestrator |
2026-04-11 02:15:30.721299 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-04-11 02:15:30.721304 | orchestrator | Saturday 11 April 2026 02:15:18 +0000 (0:00:00.261) 0:00:00.261 ********
2026-04-11 02:15:30.721308 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:15:30.721313 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:15:30.721317 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:15:30.721321 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:15:30.721325 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:15:30.721331 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:15:30.721337 | orchestrator |
2026-04-11 02:15:30.721343 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:15:30.721352 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:15:30.721363 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:15:30.721370 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:15:30.721377 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:15:30.721383 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:15:30.721389 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:15:30.721396 | orchestrator |
2026-04-11 02:15:30.721402 | orchestrator |
2026-04-11 02:15:30.721410 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:15:30.721417 | orchestrator | Saturday 11 April 2026 02:15:30 +0000 (0:00:12.288) 0:00:12.550 ********
2026-04-11 02:15:30.721424 | orchestrator | ===============================================================================
2026-04-11 02:15:30.721429 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.29s
2026-04-11 02:15:31.072310 | orchestrator | + osism apply hddtemp
2026-04-11 02:15:43.389719 | orchestrator | 2026-04-11 02:15:43 | INFO  | Task ad980c5d-f50e-4ac6-bb90-ad69bf7e47a8 (hddtemp) was prepared for execution.
2026-04-11 02:15:43.389808 | orchestrator | 2026-04-11 02:15:43 | INFO  | It takes a moment until task ad980c5d-f50e-4ac6-bb90-ad69bf7e47a8 (hddtemp) has been started and output is visible here.
2026-04-11 02:16:12.741302 | orchestrator |
2026-04-11 02:16:12.741556 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-04-11 02:16:12.741589 | orchestrator |
2026-04-11 02:16:12.741611 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-04-11 02:16:12.741632 | orchestrator | Saturday 11 April 2026 02:15:48 +0000 (0:00:00.306) 0:00:00.306 ********
2026-04-11 02:16:12.741652 | orchestrator | ok: [testbed-manager]
2026-04-11 02:16:12.741674 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:16:12.741694 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:16:12.741716 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:16:12.741736 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:16:12.741757 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:16:12.741777 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:16:12.741797 | orchestrator |
2026-04-11 02:16:12.741817 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-04-11 02:16:12.741835 | orchestrator | Saturday 11 April 2026 02:15:48 +0000 (0:00:00.808) 0:00:01.115 ********
2026-04-11 02:16:12.741857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 02:16:12.741913 | orchestrator |
2026-04-11 02:16:12.741936 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-04-11 02:16:12.741955 | orchestrator | Saturday 11 April 2026 02:15:50 +0000 (0:00:01.485) 0:00:02.600 ********
2026-04-11 02:16:12.741975 | orchestrator | ok: [testbed-manager]
2026-04-11 02:16:12.741993 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:16:12.742011 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:16:12.742128 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:16:12.742147 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:16:12.742168 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:16:12.742187 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:16:12.742209 | orchestrator |
2026-04-11 02:16:12.742227 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-04-11 02:16:12.742245 | orchestrator | Saturday 11 April 2026 02:15:52 +0000 (0:00:01.984) 0:00:04.584 ********
2026-04-11 02:16:12.742265 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:16:12.742284 | orchestrator | changed: [testbed-manager]
2026-04-11 02:16:12.742303 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:16:12.742320 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:16:12.742339 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:16:12.742358 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:16:12.742377 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:16:12.742428 | orchestrator |
2026-04-11 02:16:12.742442 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-04-11 02:16:12.742453 | orchestrator | Saturday 11 April 2026 02:15:53 +0000 (0:00:01.231) 0:00:05.816 ********
2026-04-11 02:16:12.742463 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:16:12.742473 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:16:12.742482 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:16:12.742514 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:16:12.742538 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:16:12.742555 | orchestrator | ok: [testbed-manager]
2026-04-11 02:16:12.742570 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:16:12.742586 | orchestrator |
2026-04-11 02:16:12.742601 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-04-11 02:16:12.742617 | orchestrator | Saturday 11 April 2026 02:15:55 +0000 (0:00:01.998) 0:00:07.815 ********
2026-04-11 02:16:12.742633 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:16:12.742649 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:16:12.742665 | orchestrator | changed: [testbed-manager]
2026-04-11 02:16:12.742680 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:16:12.742696 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:16:12.742706 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:16:12.742716 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:16:12.742725 | orchestrator |
2026-04-11 02:16:12.742735 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-04-11 02:16:12.742744 | orchestrator | Saturday 11 April 2026 02:15:56 +0000 (0:00:00.942) 0:00:08.757 ********
2026-04-11 02:16:12.742754 | orchestrator | changed: [testbed-manager]
2026-04-11 02:16:12.742763 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:16:12.742773 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:16:12.742783 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:16:12.742792 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:16:12.742802 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:16:12.742811 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:16:12.742821 | orchestrator |
2026-04-11 02:16:12.742831 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-04-11 02:16:12.742840 | orchestrator | Saturday 11 April 2026 02:16:08 +0000 (0:00:12.061) 0:00:20.818 ********
2026-04-11 02:16:12.742851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 02:16:12.742877 | orchestrator |
2026-04-11 02:16:12.742887 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-04-11 02:16:12.742896 | orchestrator | Saturday 11 April 2026 02:16:10 +0000 (0:00:01.594) 0:00:22.413 ********
2026-04-11 02:16:12.742906 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:16:12.742915 | orchestrator | changed: [testbed-manager]
2026-04-11 02:16:12.742925 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:16:12.742935 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:16:12.742945 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:16:12.742954 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:16:12.742964 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:16:12.742974 | orchestrator |
2026-04-11 02:16:12.742983 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:16:12.742993 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:16:12.743028 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:16:12.743039 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:16:12.743049 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:16:12.743058 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:16:12.743068 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:16:12.743078 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:16:12.743087 | orchestrator |
2026-04-11 02:16:12.743097 | orchestrator |
2026-04-11 02:16:12.743107 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:16:12.743123 | orchestrator | Saturday 11 April 2026 02:16:12 +0000 (0:00:01.966) 0:00:24.380 ********
2026-04-11 02:16:12.743134 | orchestrator | ===============================================================================
2026-04-11 02:16:12.743144 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.06s
2026-04-11 02:16:12.743154 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.00s
2026-04-11 02:16:12.743164 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.98s
2026-04-11 02:16:12.743173 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.97s
2026-04-11 02:16:12.743183 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.59s
2026-04-11 02:16:12.743193 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.49s
2026-04-11 02:16:12.743202 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.23s
2026-04-11 02:16:12.743212 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.94s
2026-04-11 02:16:12.743222 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.81s
2026-04-11 02:16:13.108311 | orchestrator | ++ semver 9.5.0 7.1.1
2026-04-11 02:16:13.162319 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-11 02:16:13.162423 | orchestrator | + sudo systemctl restart manager.service
2026-04-11 02:16:27.230117 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-11 02:16:27.230279 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-04-11 02:16:27.230305 | orchestrator | + local max_attempts=60
2026-04-11 02:16:27.230323 | orchestrator | + local name=ceph-ansible
2026-04-11 02:16:27.230339 | orchestrator | + local attempt_num=1
2026-04-11 02:16:27.230357 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:16:27.267199 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-11 02:16:27.267271 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-11 02:16:27.267279 | orchestrator | + sleep 5
2026-04-11 02:16:32.273005 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:16:32.317789 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-11 02:16:32.317875 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-11 02:16:32.317887 | orchestrator | + sleep 5
2026-04-11 02:16:37.320633 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:16:37.351438 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-11 02:16:37.351525 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-11 02:16:37.351535 | orchestrator | + sleep 5
2026-04-11 02:16:42.355426 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:16:42.396160 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-11 02:16:42.396238 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-11 02:16:42.396251 | orchestrator | + sleep 5
2026-04-11 02:16:47.400647 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:16:47.433527 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-11 02:16:47.433616 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-11 02:16:47.433628 | orchestrator | + sleep 5
2026-04-11 02:16:52.438520 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:16:52.476669 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-11 02:16:52.476743 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-11 02:16:52.476749 | orchestrator | + sleep 5
2026-04-11 02:16:57.480297 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:16:57.519211 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-11 02:16:57.519314 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-11 02:16:57.519386 | orchestrator | + sleep 5
2026-04-11 02:17:02.524382 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:17:02.577086 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-11 02:17:02.577199 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-11 02:17:02.577215 | orchestrator | + sleep 5
2026-04-11 02:17:07.579903 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:17:07.623801 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-11 02:17:07.623907 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-11 02:17:07.623925 | orchestrator | + sleep 5
2026-04-11 02:17:12.626858 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:17:12.656155 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-11 02:17:12.656249 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-11 02:17:12.656263 | orchestrator | + sleep 5
2026-04-11 02:17:17.660195 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:17:17.701252 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-11 02:17:17.701344 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-11 02:17:17.701351 | orchestrator | + sleep 5
2026-04-11 02:17:22.706888 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:17:22.743548 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-11 02:17:22.743672 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-11 02:17:22.743687 | orchestrator | + sleep 5
2026-04-11 02:17:27.748788 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:17:27.789980 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-11 02:17:27.790167 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-11 02:17:27.790190 | orchestrator | + sleep 5
2026-04-11 02:17:32.796114 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-11 02:17:32.845997 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-11 02:17:32.846144 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-11 02:17:32.846160 | orchestrator | + local max_attempts=60
2026-04-11 02:17:32.846174 | orchestrator | + local name=kolla-ansible
2026-04-11 02:17:32.846186 | orchestrator | + local attempt_num=1
2026-04-11 02:17:32.846209 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-11 02:17:32.886976 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-11 02:17:32.887072 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-11 02:17:32.887148 | orchestrator | + local max_attempts=60
2026-04-11 02:17:32.887164 | orchestrator | + local name=osism-ansible
2026-04-11 02:17:32.887175 | orchestrator | + local attempt_num=1
2026-04-11 02:17:32.888723 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-11 02:17:32.926924 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-11 02:17:32.927040 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-11 02:17:32.927057 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-04-11 02:17:33.124511 | orchestrator | ARA in ceph-ansible already disabled.
2026-04-11 02:17:33.308092 | orchestrator | ARA in kolla-ansible already disabled.
2026-04-11 02:17:33.478698 | orchestrator | ARA in osism-ansible already disabled.
2026-04-11 02:17:33.619353 | orchestrator | ARA in osism-kubernetes already disabled.
2026-04-11 02:17:33.619568 | orchestrator | + osism apply gather-facts
2026-04-11 02:17:46.098736 | orchestrator | 2026-04-11 02:17:46 | INFO  | Task 78b50ad4-d23e-4ecd-b105-49c2a04f6e08 (gather-facts) was prepared for execution.
2026-04-11 02:17:46.098872 | orchestrator | 2026-04-11 02:17:46 | INFO  | It takes a moment until task 78b50ad4-d23e-4ecd-b105-49c2a04f6e08 (gather-facts) has been started and output is visible here.
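The xtrace above (repeated `docker inspect` / compare / counter check / `sleep 5`) lets the `wait_for_container_healthy` helper be reconstructed. This is a sketch inferred from the trace, not the actual OSISM testbed script: the error message and the bare `docker` (rather than `/usr/bin/docker`) are assumptions.

```shell
# Reconstructed from the xtrace: poll the Docker healthcheck status of a
# container until it reports "healthy", giving up after max_attempts polls.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        # Post-increment comparison, matching "(( attempt_num++ == max_attempts ))"
        # in the trace: the counter is bumped on every failed poll.
        if (( attempt_num++ == max_attempts )); then
            echo "container $name never became healthy" >&2  # message is an assumption
            return 1
        fi
        sleep 5
    done
}
```

In the log, `ceph-ansible` cycles through `unhealthy` and `starting` for roughly a minute before reaching `healthy`, while `kolla-ansible` and `osism-ansible` pass on the first poll.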
2026-04-11 02:18:00.101167 | orchestrator |
2026-04-11 02:18:00.101304 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-11 02:18:00.101325 | orchestrator |
2026-04-11 02:18:00.101340 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-11 02:18:00.101352 | orchestrator | Saturday 11 April 2026 02:17:50 +0000 (0:00:00.245) 0:00:00.245 ********
2026-04-11 02:18:00.101364 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:18:00.101385 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:18:00.101402 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:18:00.101415 | orchestrator | ok: [testbed-manager]
2026-04-11 02:18:00.101429 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:18:00.101441 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:18:00.101454 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:18:00.101466 | orchestrator |
2026-04-11 02:18:00.101478 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-11 02:18:00.101492 | orchestrator |
2026-04-11 02:18:00.101505 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-11 02:18:00.101518 | orchestrator | Saturday 11 April 2026 02:17:59 +0000 (0:00:08.317) 0:00:08.562 ********
2026-04-11 02:18:00.101532 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:18:00.101547 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:18:00.101561 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:18:00.101575 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:18:00.101584 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:18:00.101592 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:18:00.101601 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:18:00.101609 | orchestrator |
2026-04-11 02:18:00.101617 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:18:00.101625 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:18:00.101635 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:18:00.101643 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:18:00.101652 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:18:00.101660 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:18:00.101668 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:18:00.101702 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 02:18:00.101710 | orchestrator |
2026-04-11 02:18:00.101719 | orchestrator |
2026-04-11 02:18:00.101729 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:18:00.101739 | orchestrator | Saturday 11 April 2026 02:17:59 +0000 (0:00:00.582) 0:00:09.145 ********
2026-04-11 02:18:00.101748 | orchestrator | ===============================================================================
2026-04-11 02:18:00.101758 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.32s
2026-04-11 02:18:00.101768 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s
2026-04-11 02:18:00.520063 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-04-11 02:18:00.533604 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-04-11 02:18:00.549569 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-04-11 02:18:00.568653 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-04-11 02:18:00.590313 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-04-11 02:18:00.610742 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-04-11 02:18:00.639572 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-04-11 02:18:00.656348 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-04-11 02:18:00.669827 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-04-11 02:18:00.690716 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-04-11 02:18:00.707603 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-04-11 02:18:00.733355 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-04-11 02:18:00.751490 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-04-11 02:18:00.772648 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-04-11 02:18:00.786708 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-04-11 02:18:00.808278 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-04-11 02:18:00.826488 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-04-11 02:18:00.840161 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-04-11 02:18:00.858868 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-04-11 02:18:00.872078 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia
2026-04-11 02:18:00.891514 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-04-11 02:18:00.907783 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-04-11 02:18:00.923578 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-04-11 02:18:00.946304 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-11 02:18:01.385173 | orchestrator | ok: Runtime: 0:25:15.974438
2026-04-11 02:18:01.487968 |
2026-04-11 02:18:01.488154 | TASK [Deploy services]
2026-04-11 02:18:02.179831 | orchestrator |
2026-04-11 02:18:02.180059 | orchestrator | # DEPLOY SERVICES
2026-04-11 02:18:02.180105 | orchestrator |
2026-04-11 02:18:02.180130 | orchestrator | + set -e
2026-04-11 02:18:02.180153 | orchestrator | + echo
2026-04-11 02:18:02.180176 | orchestrator | + echo '# DEPLOY SERVICES'
2026-04-11 02:18:02.180216 | orchestrator | + echo
2026-04-11 02:18:02.180303 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-11 02:18:02.180330 | orchestrator | ++ export INTERACTIVE=false
2026-04-11 02:18:02.180353 | orchestrator | ++ INTERACTIVE=false
2026-04-11 02:18:02.180378 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-11 02:18:02.180420 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-11 02:18:02.180439 | orchestrator | + source /opt/manager-vars.sh
2026-04-11 02:18:02.180462 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-11 02:18:02.180480 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-11 02:18:02.180507 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-11 02:18:02.180523 | orchestrator | ++ CEPH_VERSION=reef
2026-04-11 02:18:02.180545 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-11 02:18:02.180565 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-11 02:18:02.180618 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-11 02:18:02.180636 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-11 02:18:02.180654 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-11 02:18:02.180674 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-11 02:18:02.180695 | orchestrator | ++ export ARA=false
2026-04-11 02:18:02.180714 | orchestrator | ++ ARA=false
2026-04-11 02:18:02.180733 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-11 02:18:02.180752 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-11 02:18:02.180770 | orchestrator | ++ export TEMPEST=false
2026-04-11 02:18:02.180790 | orchestrator | ++ TEMPEST=false
2026-04-11 02:18:02.180809 | orchestrator | ++ export IS_ZUUL=true
2026-04-11 02:18:02.180820 | orchestrator | ++ IS_ZUUL=true
2026-04-11 02:18:02.180831 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 02:18:02.180843 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 02:18:02.180855 | orchestrator | ++ export EXTERNAL_API=false
2026-04-11 02:18:02.180866 | orchestrator | ++ EXTERNAL_API=false
2026-04-11 02:18:02.180877 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-11 02:18:02.180888 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-11 02:18:02.180907 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-11 02:18:02.180926 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-11 02:18:02.180940 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-11 02:18:02.180962 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-11 02:18:02.180973 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-04-11 02:18:02.192171 | orchestrator | + set -e
2026-04-11 02:18:02.192294 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-11 02:18:02.192313 | orchestrator | ++ export INTERACTIVE=false
2026-04-11 02:18:02.192323 | orchestrator | ++ INTERACTIVE=false
2026-04-11 02:18:02.192332 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-11 02:18:02.192340 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-11 02:18:02.192348 | orchestrator | + source /opt/manager-vars.sh
2026-04-11 02:18:02.192357 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-11 02:18:02.192365 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-11 02:18:02.192373 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-11 02:18:02.192382 | orchestrator | ++ CEPH_VERSION=reef
2026-04-11 02:18:02.192391 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-11 02:18:02.192400 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-11 02:18:02.192408 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-11 02:18:02.192416 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-11 02:18:02.192425 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-11 02:18:02.192433 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-11 02:18:02.192441 | orchestrator | ++ export ARA=false
2026-04-11 02:18:02.192451 | orchestrator | ++ ARA=false
2026-04-11 02:18:02.192460 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-11 02:18:02.192468 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-11 02:18:02.192477 | orchestrator | ++ export TEMPEST=false
2026-04-11 02:18:02.192489 | orchestrator | ++ TEMPEST=false
2026-04-11 02:18:02.192497 | orchestrator | ++ export IS_ZUUL=true
2026-04-11 02:18:02.192506 | orchestrator | ++ IS_ZUUL=true
2026-04-11 02:18:02.192515 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 02:18:02.193084 | orchestrator |
2026-04-11 02:18:02.193144 | orchestrator | # PULL IMAGES
2026-04-11 02:18:02.193150 | orchestrator |
2026-04-11 02:18:02.193154 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 02:18:02.193160 | orchestrator | ++ export EXTERNAL_API=false
2026-04-11 02:18:02.193164 | orchestrator | ++ EXTERNAL_API=false
2026-04-11 02:18:02.193168 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-11 02:18:02.193174 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-11 02:18:02.193198 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-11 02:18:02.193202 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-11 02:18:02.193206 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-11 02:18:02.193210 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-11 02:18:02.193214 | orchestrator | + echo
2026-04-11 02:18:02.193218 | orchestrator | + echo '# PULL IMAGES'
2026-04-11 02:18:02.193222 | orchestrator | + echo
2026-04-11 02:18:02.193231 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-11 02:18:02.254771 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-11 02:18:02.254861 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-11 02:18:04.349275 | orchestrator | 2026-04-11 02:18:04 | INFO  | Trying to run play pull-images in environment custom
2026-04-11 02:18:14.455371 | orchestrator | 2026-04-11 02:18:14 | INFO  | Task 70a9b637-bf45-4d69-aef2-4ac315a0f1b3 (pull-images) was prepared for execution.
2026-04-11 02:18:14.455462 | orchestrator | 2026-04-11 02:18:14 | INFO  | Task 70a9b637-bf45-4d69-aef2-4ac315a0f1b3 is running in background. No more output. Check ARA for logs.
2026-04-11 02:18:14.823555 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-04-11 02:18:27.053016 | orchestrator | 2026-04-11 02:18:27 | INFO  | Task 192c1b75-ca30-42b8-8b09-4f168033ef91 (cgit) was prepared for execution.
2026-04-11 02:18:27.053113 | orchestrator | 2026-04-11 02:18:27 | INFO  | Task 192c1b75-ca30-42b8-8b09-4f168033ef91 is running in background. No more output. Check ARA for logs.
2026-04-11 02:18:39.815988 | orchestrator | 2026-04-11 02:18:39 | INFO  | Task 39cd67b7-4ae8-41d4-b41b-f5a39e93eb4a (dotfiles) was prepared for execution.
2026-04-11 02:18:39.816112 | orchestrator | 2026-04-11 02:18:39 | INFO  | Task 39cd67b7-4ae8-41d4-b41b-f5a39e93eb4a is running in background. No more output. Check ARA for logs.
2026-04-11 02:18:52.598345 | orchestrator | 2026-04-11 02:18:52 | INFO  | Task 104f01c7-10ee-43a9-88d3-aaa4a8ec2988 (homer) was prepared for execution.
2026-04-11 02:18:52.598448 | orchestrator | 2026-04-11 02:18:52 | INFO  | Task 104f01c7-10ee-43a9-88d3-aaa4a8ec2988 is running in background. No more output. Check ARA for logs.
2026-04-11 02:19:05.494711 | orchestrator | 2026-04-11 02:19:05 | INFO  | Task 1f341b9a-21a7-4c47-921d-dd9161de96fa (phpmyadmin) was prepared for execution.
2026-04-11 02:19:05.495738 | orchestrator | 2026-04-11 02:19:05 | INFO  | Task 1f341b9a-21a7-4c47-921d-dd9161de96fa is running in background. No more output. Check ARA for logs.
2026-04-11 02:19:18.602556 | orchestrator | 2026-04-11 02:19:18 | INFO  | Task 0130fc28-d66c-44fb-b22b-160676280d17 (sosreport) was prepared for execution.
2026-04-11 02:19:18.602657 | orchestrator | 2026-04-11 02:19:18 | INFO  | Task 0130fc28-d66c-44fb-b22b-160676280d17 is running in background. No more output. Check ARA for logs.
2026-04-11 02:19:18.970379 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-04-11 02:19:18.979394 | orchestrator | + set -e
2026-04-11 02:19:18.979478 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-11 02:19:18.979492 | orchestrator | ++ export INTERACTIVE=false
2026-04-11 02:19:18.979502 | orchestrator | ++ INTERACTIVE=false
2026-04-11 02:19:18.979514 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-11 02:19:18.979523 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-11 02:19:18.980831 | orchestrator | + source /opt/manager-vars.sh
2026-04-11 02:19:18.980872 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-11 02:19:18.980883 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-11 02:19:18.980891 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-11 02:19:18.980900 | orchestrator | ++ CEPH_VERSION=reef
2026-04-11 02:19:18.980909 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-11 02:19:18.980919 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-11 02:19:18.980928 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-11 02:19:18.980936 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-11 02:19:18.980946 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-11 02:19:18.980955 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-11 02:19:18.980963 | orchestrator | ++ export ARA=false
2026-04-11 02:19:18.980972 | orchestrator | ++ ARA=false
2026-04-11 02:19:18.980981 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-11 02:19:18.981021 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-11 02:19:18.981030 | orchestrator | ++ export TEMPEST=false
2026-04-11 02:19:18.981039 | orchestrator | ++ TEMPEST=false
2026-04-11 02:19:18.981047 | orchestrator | ++ export IS_ZUUL=true
2026-04-11 02:19:18.981056 | orchestrator | ++ IS_ZUUL=true
2026-04-11 02:19:18.981081 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 02:19:18.981096 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 02:19:18.981105 | orchestrator | ++ export EXTERNAL_API=false
2026-04-11 02:19:18.981113 | orchestrator | ++ EXTERNAL_API=false
2026-04-11 02:19:18.981122 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-11 02:19:18.981131 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-11 02:19:18.981139 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-11 02:19:18.981149 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-11 02:19:18.981188 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-11 02:19:18.981209 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-11 02:19:18.981222 | orchestrator | ++ semver 9.5.0 8.0.3
2026-04-11 02:19:19.039605 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-11 02:19:19.039675 | orchestrator | + osism apply frr
2026-04-11 02:19:31.400224 | orchestrator | 2026-04-11 02:19:31 | INFO  | Task 662298ca-b928-45d6-a157-a51089e85f14 (frr) was prepared for execution.
2026-04-11 02:19:31.400302 | orchestrator | 2026-04-11 02:19:31 | INFO  | It takes a moment until task 662298ca-b928-45d6-a157-a51089e85f14 (frr) has been started and output is visible here.
2026-04-11 02:20:12.255654 | orchestrator |
2026-04-11 02:20:12.255747 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-11 02:20:12.255758 | orchestrator |
2026-04-11 02:20:12.255763 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-11 02:20:12.255773 | orchestrator | Saturday 11 April 2026 02:19:37 +0000 (0:00:00.319) 0:00:00.319 ********
2026-04-11 02:20:12.255778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-11 02:20:12.255783 | orchestrator |
2026-04-11 02:20:12.255788 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-11 02:20:12.255792 | orchestrator | Saturday 11 April 2026 02:19:37 +0000 (0:00:00.233) 0:00:00.553 ********
2026-04-11 02:20:12.255796 | orchestrator | changed: [testbed-manager]
2026-04-11 02:20:12.255800 | orchestrator |
2026-04-11 02:20:12.255804 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-11 02:20:12.255810 | orchestrator | Saturday 11 April 2026 02:19:39 +0000 (0:00:02.284) 0:00:02.837 ********
2026-04-11 02:20:12.255813 | orchestrator | changed: [testbed-manager]
2026-04-11 02:20:12.255817 | orchestrator |
2026-04-11 02:20:12.255821 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-11 02:20:12.255825 | orchestrator | Saturday 11 April 2026 02:19:59 +0000 (0:00:20.277) 0:00:23.115 ********
2026-04-11 02:20:12.255829 | orchestrator | ok: [testbed-manager]
2026-04-11 02:20:12.255833 | orchestrator |
2026-04-11 02:20:12.255837 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-11 02:20:12.255841 | orchestrator | Saturday 11 April 2026 02:20:01 +0000 (0:00:01.337) 0:00:24.453 ********
2026-04-11 02:20:12.255845 | orchestrator | changed: [testbed-manager]
2026-04-11 02:20:12.255848 | orchestrator |
2026-04-11 02:20:12.255852 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-11 02:20:12.255856 | orchestrator | Saturday 11 April 2026 02:20:02 +0000 (0:00:01.303) 0:00:25.757 ********
2026-04-11 02:20:12.255860 | orchestrator | ok: [testbed-manager]
2026-04-11 02:20:12.255863 | orchestrator |
2026-04-11 02:20:12.255867 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-11 02:20:12.255872 | orchestrator | Saturday 11 April 2026 02:20:04 +0000 (0:00:01.408) 0:00:27.165 ********
2026-04-11 02:20:12.255876 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:20:12.255879 | orchestrator |
2026-04-11 02:20:12.255883 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-11 02:20:12.255887 | orchestrator | Saturday 11 April 2026 02:20:04 +0000 (0:00:00.169) 0:00:27.335 ********
2026-04-11 02:20:12.255907 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:20:12.255911 | orchestrator |
2026-04-11 02:20:12.255915 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-11 02:20:12.255919 | orchestrator | Saturday 11 April 2026 02:20:04 +0000 (0:00:00.182) 0:00:27.517 ********
2026-04-11 02:20:12.255923 | orchestrator | changed: [testbed-manager]
2026-04-11 02:20:12.255926 | orchestrator |
2026-04-11 02:20:12.255930 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-11 02:20:12.255934 | orchestrator | Saturday 11 April 2026 02:20:05 +0000 (0:00:01.098) 0:00:28.616 ********
2026-04-11 02:20:12.255938 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-11 02:20:12.255942 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-11 02:20:12.255946 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-11 02:20:12.255950 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-11 02:20:12.255954 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-11 02:20:12.255958 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-11 02:20:12.255961 | orchestrator |
2026-04-11 02:20:12.255965 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-11 02:20:12.255969 | orchestrator | Saturday 11 April 2026 02:20:08 +0000 (0:00:02.733) 0:00:31.350 ********
2026-04-11 02:20:12.255973 | orchestrator | ok: [testbed-manager]
2026-04-11 02:20:12.255976 | orchestrator |
2026-04-11 02:20:12.255980 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-04-11 02:20:12.255984 | orchestrator | Saturday 11 April 2026 02:20:10 +0000 (0:00:02.094) 0:00:33.445 ********
2026-04-11 02:20:12.255988 | orchestrator | changed: [testbed-manager]
2026-04-11 02:20:12.255991 | orchestrator |
2026-04-11 02:20:12.255995 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:20:12.255999 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:20:12.256003 | orchestrator |
2026-04-11 02:20:12.256007 | orchestrator |
2026-04-11 02:20:12.256015 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:20:12.256019 | orchestrator | Saturday 11 April 2026 02:20:11 +0000 (0:00:01.548) 0:00:34.993 ********
2026-04-11 02:20:12.256023 | orchestrator | ===============================================================================
2026-04-11 02:20:12.256026 | orchestrator | osism.services.frr : Install frr package ------------------------------- 20.28s
2026-04-11 02:20:12.256030 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.73s
2026-04-11 02:20:12.256034 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.29s
2026-04-11 02:20:12.256038 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.09s
2026-04-11 02:20:12.256041 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.55s
2026-04-11 02:20:12.256055 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.41s
2026-04-11 02:20:12.256059 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.34s
2026-04-11 02:20:12.256063 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.30s
2026-04-11 02:20:12.256067 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.10s
2026-04-11 02:20:12.256071 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s
2026-04-11 02:20:12.256074 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s
2026-04-11 02:20:12.256078 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.17s
2026-04-11 02:20:12.698257 | orchestrator | + osism apply kubernetes
2026-04-11 02:20:15.325740 | orchestrator | 2026-04-11 02:20:15 | INFO  | Task adbf4e50-9a80-48e6-9360-8a1f0c95605f (kubernetes) was prepared for execution.
2026-04-11 02:20:15.325819 | orchestrator | 2026-04-11 02:20:15 | INFO  | It takes a moment until task adbf4e50-9a80-48e6-9360-8a1f0c95605f (kubernetes) has been started and output is visible here.
2026-04-11 02:20:43.707426 | orchestrator |
2026-04-11 02:20:43.707549 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-11 02:20:43.707570 | orchestrator |
2026-04-11 02:20:43.707584 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-11 02:20:43.707599 | orchestrator | Saturday 11 April 2026 02:20:21 +0000 (0:00:00.254) 0:00:00.254 ********
2026-04-11 02:20:43.707613 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:20:43.707629 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:20:43.707641 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:20:43.707656 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:20:43.707669 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:20:43.707682 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:20:43.707695 | orchestrator |
2026-04-11 02:20:43.707708 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-11 02:20:43.707722 | orchestrator | Saturday 11 April 2026 02:20:22 +0000 (0:00:00.899) 0:00:01.154 ********
2026-04-11 02:20:43.707735 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:20:43.707749 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:20:43.707763 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:20:43.707776 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:20:43.707790 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:20:43.707800 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:20:43.707808 | orchestrator |
2026-04-11 02:20:43.707816 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-11 02:20:43.707826 | orchestrator | Saturday 11 April 2026 02:20:23 +0000 (0:00:00.860) 0:00:02.015 ********
2026-04-11 02:20:43.707834 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:20:43.707842 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:20:43.707850 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:20:43.707858 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:20:43.707866 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:20:43.707874 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:20:43.707882 | orchestrator |
2026-04-11 02:20:43.707890 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-11 02:20:43.707897 | orchestrator | Saturday 11 April 2026 02:20:24 +0000 (0:00:00.857) 0:00:02.872 ********
2026-04-11 02:20:43.707905 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:20:43.707913 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:20:43.707921 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:20:43.707934 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:20:43.707943 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:20:43.707953 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:20:43.707965 | orchestrator |
2026-04-11 02:20:43.707978 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-11 02:20:43.707992 | orchestrator | Saturday 11 April 2026 02:20:26 +0000 (0:00:01.808) 0:00:04.681 ********
2026-04-11 02:20:43.708005 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:20:43.708018 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:20:43.708031 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:20:43.708045 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:20:43.708058 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:20:43.708073 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:20:43.708157 | orchestrator |
2026-04-11 02:20:43.708175 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-11 02:20:43.708189 | orchestrator | Saturday 11 April 2026 02:20:28 +0000 (0:00:01.677) 0:00:06.358 ********
2026-04-11 02:20:43.708203 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:20:43.708248 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:20:43.708263 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:20:43.708278 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:20:43.708291 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:20:43.708304 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:20:43.708319 | orchestrator |
2026-04-11 02:20:43.708346 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-11 02:20:43.708361 | orchestrator | Saturday 11 April 2026 02:20:29 +0000 (0:00:01.031) 0:00:07.389 ********
2026-04-11 02:20:43.708374 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:20:43.708387 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:20:43.708400 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:20:43.708414 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:20:43.708426 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:20:43.708440 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:20:43.708454 | orchestrator |
2026-04-11 02:20:43.708468 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-11 02:20:43.708482 | orchestrator | Saturday 11 April 2026 02:20:29 +0000 (0:00:00.898) 0:00:08.071 ********
2026-04-11 02:20:43.708492 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:20:43.708500 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:20:43.708508 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:20:43.708516 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:20:43.708524 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:20:43.708532 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:20:43.708539 | orchestrator |
2026-04-11 02:20:43.708549 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-11 02:20:43.708562 | orchestrator | Saturday 11 April 2026 02:20:30 +0000 (0:00:00.898) 0:00:08.970 ********
2026-04-11 02:20:43.708575 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-11 02:20:43.708588 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-11 02:20:43.708600 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:20:43.708614 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-11 02:20:43.708626 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-11 02:20:43.708639 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:20:43.708653 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-11 02:20:43.708666 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-11 02:20:43.708679 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:20:43.708693 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-11 02:20:43.708729 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-11 02:20:43.708738 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:20:43.708746 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-11 02:20:43.708754 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-11 02:20:43.708762 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:20:43.708770 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-11 02:20:43.708778 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-11 02:20:43.708786 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:20:43.708794 | orchestrator |
2026-04-11 02:20:43.708802 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-11 02:20:43.708810 | orchestrator | Saturday 11 April 2026 02:20:31 +0000 (0:00:00.725) 0:00:09.695 ********
2026-04-11 02:20:43.708818 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:20:43.708826 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:20:43.708834 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:20:43.708851 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:20:43.708859 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:20:43.708867 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:20:43.708875 | orchestrator |
2026-04-11 02:20:43.708883 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-11 02:20:43.708892 | orchestrator | Saturday 11 April 2026 02:20:32 +0000 (0:00:01.308) 0:00:11.004 ********
2026-04-11 02:20:43.708900 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:20:43.708908 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:20:43.708916 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:20:43.708924 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:20:43.708932 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:20:43.708939 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:20:43.708947 | orchestrator |
2026-04-11 02:20:43.708955 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-11 02:20:43.708964 | orchestrator | Saturday 11 April 2026 02:20:33 +0000 (0:00:00.923) 0:00:11.927 ********
2026-04-11 02:20:43.708971 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:20:43.708979 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:20:43.708987 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:20:43.708995 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:20:43.709003 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:20:43.709011 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:20:43.709018 | orchestrator |
2026-04-11 02:20:43.709026 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-11 02:20:43.709034 | orchestrator | Saturday 11 April 2026 02:20:39 +0000 (0:00:05.960) 0:00:17.888 ********
2026-04-11 02:20:43.709042 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:20:43.709056 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:20:43.709064 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:20:43.709072 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:20:43.709080 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:20:43.709113 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:20:43.709122 | orchestrator |
2026-04-11 02:20:43.709131 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-11 02:20:43.709139 | orchestrator | Saturday 11 April 2026 02:20:40 +0000 (0:00:00.973) 0:00:18.862 ********
2026-04-11 02:20:43.709147 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:20:43.709155 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:20:43.709162 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:20:43.709170 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:20:43.709178 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:20:43.709186 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:20:43.709194 | orchestrator |
2026-04-11 02:20:43.709202 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-11 02:20:43.709211 | orchestrator | Saturday 11 April 2026 02:20:41 +0000 (0:00:01.462) 0:00:20.324 ********
2026-04-11 02:20:43.709219 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:20:43.709227 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:20:43.709235 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:20:43.709242 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:20:43.709250 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:20:43.709258 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:20:43.709266 | orchestrator |
2026-04-11 02:20:43.709274 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-11 02:20:43.709282 | orchestrator | Saturday 11 April 2026 02:20:42 +0000 (0:00:00.686) 0:00:21.011 ********
2026-04-11 02:20:43.709289 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-11 02:20:43.709303 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-11 02:20:43.709315 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:20:43.709328 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-11 02:20:43.709352 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-11 02:20:43.709367 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:20:43.709379 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-11 02:20:43.709393 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-11 02:20:43.709405 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:20:43.709418 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-11 02:20:43.709430 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-11 02:20:43.709442 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:20:43.709450 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-11 02:20:43.709458 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-11 02:20:43.709465 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:20:43.709473 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-11 02:20:43.709481 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-11 02:20:43.709489 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:20:43.709497 | orchestrator |
2026-04-11 02:20:43.709505 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-11 02:20:43.709525 | orchestrator | Saturday 11 April 2026 02:20:43 +0000 (0:00:01.016) 0:00:22.027 ********
2026-04-11 02:22:00.985909 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:22:00.986172 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:22:00.986207 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:22:00.986227 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:22:00.986246 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:22:00.986264 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:22:00.986284 | orchestrator |
2026-04-11 02:22:00.986297 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-11 02:22:00.986309 | orchestrator | Saturday 11 April 2026 02:20:44 +0000 (0:00:00.843) 0:00:22.871 ********
2026-04-11 02:22:00.986322 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:22:00.986339 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:22:00.986355 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:22:00.986370 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:22:00.986385 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:22:00.986399 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:22:00.986416 | orchestrator |
2026-04-11 02:22:00.986433 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-11 02:22:00.986450 | orchestrator |
2026-04-11 02:22:00.986467 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-11 02:22:00.986485 | orchestrator | Saturday 11 April 2026 02:20:46 +0000 (0:00:01.550) 0:00:24.422 ********
2026-04-11 02:22:00.986502 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:22:00.986520 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:22:00.986535 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:22:00.986552 | orchestrator |
2026-04-11 02:22:00.986570 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-11 02:22:00.986588 | orchestrator | Saturday 11 April 2026 02:20:47 +0000 (0:00:01.684) 0:00:26.107 ********
2026-04-11 02:22:00.986605 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:22:00.986623 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:22:00.986639 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:22:00.986657 | orchestrator |
2026-04-11 02:22:00.986674 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-11 02:22:00.986691 | orchestrator | Saturday 11 April 2026 02:20:49 +0000 (0:00:01.276) 0:00:27.383 ********
2026-04-11 02:22:00.986708 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:22:00.986724 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:22:00.986737 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:22:00.986750 | orchestrator |
2026-04-11 02:22:00.986767 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-11 02:22:00.986782 | orchestrator | Saturday 11 April 2026 02:20:49 +0000 (0:00:00.919) 0:00:28.303 ********
2026-04-11 02:22:00.986830 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:22:00.986848 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:22:00.986865 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:22:00.986881 | orchestrator |
2026-04-11 02:22:00.986897 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-11 02:22:00.986914 | orchestrator | Saturday 11 April 2026 02:20:50 +0000 (0:00:00.509) 0:00:29.171 ********
2026-04-11 02:22:00.986930 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:22:00.986945 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:22:00.986961 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:22:00.986977 | orchestrator |
2026-04-11 02:22:00.986994 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-11 02:22:00.987056 | orchestrator | Saturday 11 April 2026 02:20:51 +0000 (0:00:00.509) 0:00:29.681 ********
2026-04-11 02:22:00.987070 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:22:00.987081 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:22:00.987097 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:22:00.987114 | orchestrator |
2026-04-11 02:22:00.987130 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-11 02:22:00.987145 | orchestrator | Saturday 11 April 2026 02:20:52 +0000 (0:00:01.383) 0:00:31.065 ********
2026-04-11 02:22:00.987161 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:22:00.987178 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:22:00.987196 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:22:00.987213 | orchestrator |
2026-04-11 02:22:00.987228 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-11 02:22:00.987246 | orchestrator | Saturday 11 April 2026 02:20:54 +0000 (0:00:01.481) 0:00:32.546 ********
2026-04-11 02:22:00.987263 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:22:00.987278 | orchestrator |
2026-04-11 02:22:00.987294 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-11 02:22:00.987310
| orchestrator | Saturday 11 April 2026 02:20:54 +0000 (0:00:00.552) 0:00:33.099 ******** 2026-04-11 02:22:00.987326 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:22:00.987342 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:22:00.987359 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:22:00.987376 | orchestrator | 2026-04-11 02:22:00.987393 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-04-11 02:22:00.987408 | orchestrator | Saturday 11 April 2026 02:20:57 +0000 (0:00:02.528) 0:00:35.627 ******** 2026-04-11 02:22:00.987425 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:22:00.987441 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:22:00.987457 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:22:00.987472 | orchestrator | 2026-04-11 02:22:00.987490 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-11 02:22:00.987505 | orchestrator | Saturday 11 April 2026 02:20:57 +0000 (0:00:00.703) 0:00:36.330 ******** 2026-04-11 02:22:00.987521 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:22:00.987538 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:22:00.987552 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:22:00.987568 | orchestrator | 2026-04-11 02:22:00.987584 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-11 02:22:00.987599 | orchestrator | Saturday 11 April 2026 02:20:58 +0000 (0:00:00.844) 0:00:37.175 ******** 2026-04-11 02:22:00.987615 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:22:00.987630 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:22:00.987646 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:22:00.987662 | orchestrator | 2026-04-11 02:22:00.987679 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-11 02:22:00.987721 | orchestrator | 
Saturday 11 April 2026 02:21:00 +0000 (0:00:01.391) 0:00:38.567 ******** 2026-04-11 02:22:00.987740 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:22:00.987774 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:22:00.987791 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:22:00.987807 | orchestrator | 2026-04-11 02:22:00.987824 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-11 02:22:00.987841 | orchestrator | Saturday 11 April 2026 02:21:00 +0000 (0:00:00.627) 0:00:39.195 ******** 2026-04-11 02:22:00.987856 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:22:00.987872 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:22:00.987888 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:22:00.987905 | orchestrator | 2026-04-11 02:22:00.987921 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-11 02:22:00.987937 | orchestrator | Saturday 11 April 2026 02:21:01 +0000 (0:00:00.418) 0:00:39.613 ******** 2026-04-11 02:22:00.987952 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:22:00.987968 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:22:00.987984 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:22:00.988001 | orchestrator | 2026-04-11 02:22:00.988028 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-11 02:22:00.988124 | orchestrator | Saturday 11 April 2026 02:21:02 +0000 (0:00:01.151) 0:00:40.764 ******** 2026-04-11 02:22:00.988141 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:22:00.988158 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:22:00.988174 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:22:00.988190 | orchestrator | 2026-04-11 02:22:00.988206 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-11 02:22:00.988221 | orchestrator | Saturday 11 April 2026 
02:21:05 +0000 (0:00:02.765) 0:00:43.530 ******** 2026-04-11 02:22:00.988238 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:22:00.988254 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:22:00.988271 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:22:00.988292 | orchestrator | 2026-04-11 02:22:00.988304 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-11 02:22:00.988314 | orchestrator | Saturday 11 April 2026 02:21:05 +0000 (0:00:00.357) 0:00:43.887 ******** 2026-04-11 02:22:00.988325 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-11 02:22:00.988337 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-11 02:22:00.988347 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-11 02:22:00.988356 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-11 02:22:00.988366 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-11 02:22:00.988376 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-11 02:22:00.988387 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-11 02:22:00.988404 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-04-11 02:22:00.988420 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-11 02:22:00.988436 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-11 02:22:00.988451 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-11 02:22:00.988479 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-11 02:22:00.988495 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-11 02:22:00.988512 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-11 02:22:00.988528 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-04-11 02:22:00.988546 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:22:00.988563 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:22:00.988579 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:22:00.988595 | orchestrator | 2026-04-11 02:22:00.988620 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-11 02:22:00.988637 | orchestrator | Saturday 11 April 2026 02:21:59 +0000 (0:00:54.073) 0:01:37.961 ******** 2026-04-11 02:22:00.988654 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:22:00.988670 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:22:00.988684 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:22:00.988694 | orchestrator | 2026-04-11 02:22:00.988705 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-11 02:22:00.988722 | orchestrator | Saturday 11 April 2026 02:21:59 +0000 (0:00:00.367) 0:01:38.328 ******** 2026-04-11 02:22:00.988753 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:22:40.813470 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:22:40.813582 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:22:40.813598 | orchestrator | 2026-04-11 02:22:40.813609 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-11 02:22:40.813621 | orchestrator | Saturday 11 April 2026 02:22:00 +0000 (0:00:00.984) 0:01:39.312 ******** 2026-04-11 02:22:40.813631 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:22:40.813642 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:22:40.813652 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:22:40.813662 | orchestrator | 2026-04-11 02:22:40.813672 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-11 02:22:40.813682 | orchestrator | Saturday 11 April 2026 02:22:02 +0000 (0:00:01.324) 0:01:40.636 ******** 2026-04-11 02:22:40.813692 
| orchestrator | changed: [testbed-node-2] 2026-04-11 02:22:40.813702 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:22:40.813711 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:22:40.813721 | orchestrator | 2026-04-11 02:22:40.813731 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-11 02:22:40.813747 | orchestrator | Saturday 11 April 2026 02:22:25 +0000 (0:00:23.391) 0:02:04.028 ******** 2026-04-11 02:22:40.813763 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:22:40.813780 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:22:40.813796 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:22:40.813821 | orchestrator | 2026-04-11 02:22:40.813842 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-11 02:22:40.813866 | orchestrator | Saturday 11 April 2026 02:22:26 +0000 (0:00:00.639) 0:02:04.668 ******** 2026-04-11 02:22:40.813884 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:22:40.813901 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:22:40.813918 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:22:40.813934 | orchestrator | 2026-04-11 02:22:40.813948 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-11 02:22:40.813958 | orchestrator | Saturday 11 April 2026 02:22:26 +0000 (0:00:00.644) 0:02:05.313 ******** 2026-04-11 02:22:40.813968 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:22:40.813978 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:22:40.813987 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:22:40.813997 | orchestrator | 2026-04-11 02:22:40.814116 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-11 02:22:40.814178 | orchestrator | Saturday 11 April 2026 02:22:27 +0000 (0:00:00.647) 0:02:05.960 ******** 2026-04-11 02:22:40.814197 | orchestrator | ok: [testbed-node-1] 
2026-04-11 02:22:40.814207 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:22:40.814217 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:22:40.814226 | orchestrator | 2026-04-11 02:22:40.814236 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-11 02:22:40.814246 | orchestrator | Saturday 11 April 2026 02:22:28 +0000 (0:00:00.880) 0:02:06.841 ******** 2026-04-11 02:22:40.814255 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:22:40.814265 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:22:40.814274 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:22:40.814283 | orchestrator | 2026-04-11 02:22:40.814293 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-11 02:22:40.814303 | orchestrator | Saturday 11 April 2026 02:22:28 +0000 (0:00:00.350) 0:02:07.192 ******** 2026-04-11 02:22:40.814312 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:22:40.814322 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:22:40.814331 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:22:40.814341 | orchestrator | 2026-04-11 02:22:40.814350 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-11 02:22:40.814360 | orchestrator | Saturday 11 April 2026 02:22:29 +0000 (0:00:00.675) 0:02:07.867 ******** 2026-04-11 02:22:40.814369 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:22:40.814379 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:22:40.814389 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:22:40.814398 | orchestrator | 2026-04-11 02:22:40.814408 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-11 02:22:40.814417 | orchestrator | Saturday 11 April 2026 02:22:30 +0000 (0:00:00.677) 0:02:08.545 ******** 2026-04-11 02:22:40.814427 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:22:40.814437 | 
orchestrator | changed: [testbed-node-1] 2026-04-11 02:22:40.814446 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:22:40.814456 | orchestrator | 2026-04-11 02:22:40.814466 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-11 02:22:40.814476 | orchestrator | Saturday 11 April 2026 02:22:31 +0000 (0:00:01.147) 0:02:09.693 ******** 2026-04-11 02:22:40.814488 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:22:40.814497 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:22:40.814507 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:22:40.814516 | orchestrator | 2026-04-11 02:22:40.814526 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-11 02:22:40.814537 | orchestrator | Saturday 11 April 2026 02:22:32 +0000 (0:00:00.889) 0:02:10.582 ******** 2026-04-11 02:22:40.814554 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:22:40.814569 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:22:40.814585 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:22:40.814609 | orchestrator | 2026-04-11 02:22:40.814632 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-11 02:22:40.814653 | orchestrator | Saturday 11 April 2026 02:22:32 +0000 (0:00:00.393) 0:02:10.975 ******** 2026-04-11 02:22:40.814669 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:22:40.814683 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:22:40.814698 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:22:40.814712 | orchestrator | 2026-04-11 02:22:40.814726 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-11 02:22:40.814741 | orchestrator | Saturday 11 April 2026 02:22:32 +0000 (0:00:00.318) 0:02:11.294 ******** 2026-04-11 02:22:40.814757 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:22:40.814774 | orchestrator | 
ok: [testbed-node-1] 2026-04-11 02:22:40.814791 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:22:40.814807 | orchestrator | 2026-04-11 02:22:40.814823 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-11 02:22:40.814840 | orchestrator | Saturday 11 April 2026 02:22:33 +0000 (0:00:00.738) 0:02:12.033 ******** 2026-04-11 02:22:40.814872 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:22:40.814887 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:22:40.814929 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:22:40.814946 | orchestrator | 2026-04-11 02:22:40.814964 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-11 02:22:40.814982 | orchestrator | Saturday 11 April 2026 02:22:34 +0000 (0:00:00.902) 0:02:12.936 ******** 2026-04-11 02:22:40.814999 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-11 02:22:40.815080 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-11 02:22:40.815097 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-11 02:22:40.815113 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-11 02:22:40.815129 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-11 02:22:40.815145 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-11 02:22:40.815161 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-11 02:22:40.815179 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-11 
02:22:40.815196 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-11 02:22:40.815212 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-11 02:22:40.815230 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-11 02:22:40.815246 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-11 02:22:40.815263 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-11 02:22:40.815279 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-11 02:22:40.815295 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-11 02:22:40.815311 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-11 02:22:40.815326 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-11 02:22:40.815341 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-11 02:22:40.815355 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-11 02:22:40.815370 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-11 02:22:40.815385 | orchestrator | 2026-04-11 02:22:40.815400 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-11 02:22:40.815416 | orchestrator | 2026-04-11 02:22:40.815431 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-11 02:22:40.815447 | orchestrator | Saturday 11 April 2026 02:22:37 +0000 (0:00:02.932) 
0:02:15.868 ******** 2026-04-11 02:22:40.815462 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:22:40.815477 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:22:40.815493 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:22:40.815508 | orchestrator | 2026-04-11 02:22:40.815545 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-11 02:22:40.815562 | orchestrator | Saturday 11 April 2026 02:22:37 +0000 (0:00:00.352) 0:02:16.220 ******** 2026-04-11 02:22:40.815578 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:22:40.815594 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:22:40.815611 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:22:40.815642 | orchestrator | 2026-04-11 02:22:40.815658 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-11 02:22:40.815674 | orchestrator | Saturday 11 April 2026 02:22:38 +0000 (0:00:00.936) 0:02:17.157 ******** 2026-04-11 02:22:40.815690 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:22:40.815706 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:22:40.815723 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:22:40.815739 | orchestrator | 2026-04-11 02:22:40.815755 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-11 02:22:40.815771 | orchestrator | Saturday 11 April 2026 02:22:39 +0000 (0:00:00.331) 0:02:17.488 ******** 2026-04-11 02:22:40.815787 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 02:22:40.815804 | orchestrator | 2026-04-11 02:22:40.815820 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-11 02:22:40.815835 | orchestrator | Saturday 11 April 2026 02:22:39 +0000 (0:00:00.537) 0:02:18.026 ******** 2026-04-11 02:22:40.815851 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:22:40.815867 
| orchestrator | skipping: [testbed-node-4] 2026-04-11 02:22:40.815884 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:22:40.815900 | orchestrator | 2026-04-11 02:22:40.815914 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-11 02:22:40.815931 | orchestrator | Saturday 11 April 2026 02:22:40 +0000 (0:00:00.582) 0:02:18.608 ******** 2026-04-11 02:22:40.815947 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:22:40.815963 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:22:40.815980 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:22:40.815995 | orchestrator | 2026-04-11 02:22:40.816039 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-11 02:22:40.816056 | orchestrator | Saturday 11 April 2026 02:22:40 +0000 (0:00:00.340) 0:02:18.948 ******** 2026-04-11 02:22:40.816089 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:24:22.918669 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:24:22.918755 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:24:22.918762 | orchestrator | 2026-04-11 02:24:22.918769 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-11 02:24:22.918775 | orchestrator | Saturday 11 April 2026 02:22:40 +0000 (0:00:00.354) 0:02:19.302 ******** 2026-04-11 02:24:22.918781 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:24:22.918786 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:24:22.918791 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:24:22.918796 | orchestrator | 2026-04-11 02:24:22.918801 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-11 02:24:22.918806 | orchestrator | Saturday 11 April 2026 02:22:41 +0000 (0:00:00.631) 0:02:19.934 ******** 2026-04-11 02:24:22.918810 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:24:22.918815 | 
orchestrator | changed: [testbed-node-4] 2026-04-11 02:24:22.918820 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:24:22.918825 | orchestrator | 2026-04-11 02:24:22.918830 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-11 02:24:22.918834 | orchestrator | Saturday 11 April 2026 02:22:42 +0000 (0:00:01.410) 0:02:21.345 ******** 2026-04-11 02:24:22.918839 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:24:22.918844 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:24:22.918849 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:24:22.918854 | orchestrator | 2026-04-11 02:24:22.918858 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-11 02:24:22.918863 | orchestrator | Saturday 11 April 2026 02:22:44 +0000 (0:00:01.199) 0:02:22.545 ******** 2026-04-11 02:24:22.918868 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:24:22.918873 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:24:22.918878 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:24:22.918882 | orchestrator | 2026-04-11 02:24:22.918887 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-11 02:24:22.918909 | orchestrator | 2026-04-11 02:24:22.918915 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-11 02:24:22.918920 | orchestrator | Saturday 11 April 2026 02:22:54 +0000 (0:00:09.933) 0:02:32.478 ******** 2026-04-11 02:24:22.918925 | orchestrator | ok: [testbed-manager] 2026-04-11 02:24:22.918987 | orchestrator | 2026-04-11 02:24:22.918993 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-11 02:24:22.918998 | orchestrator | Saturday 11 April 2026 02:22:55 +0000 (0:00:01.078) 0:02:33.557 ******** 2026-04-11 02:24:22.919002 | orchestrator | changed: [testbed-manager] 2026-04-11 
02:24:22.919007 | orchestrator | 2026-04-11 02:24:22.919013 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-11 02:24:22.919017 | orchestrator | Saturday 11 April 2026 02:22:55 +0000 (0:00:00.461) 0:02:34.018 ******** 2026-04-11 02:24:22.919022 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-11 02:24:22.919027 | orchestrator | 2026-04-11 02:24:22.919032 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-11 02:24:22.919037 | orchestrator | Saturday 11 April 2026 02:22:56 +0000 (0:00:00.554) 0:02:34.572 ******** 2026-04-11 02:24:22.919041 | orchestrator | changed: [testbed-manager] 2026-04-11 02:24:22.919046 | orchestrator | 2026-04-11 02:24:22.919051 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-11 02:24:22.919056 | orchestrator | Saturday 11 April 2026 02:22:57 +0000 (0:00:00.936) 0:02:35.509 ******** 2026-04-11 02:24:22.919060 | orchestrator | changed: [testbed-manager] 2026-04-11 02:24:22.919065 | orchestrator | 2026-04-11 02:24:22.919070 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-11 02:24:22.919075 | orchestrator | Saturday 11 April 2026 02:22:57 +0000 (0:00:00.657) 0:02:36.167 ******** 2026-04-11 02:24:22.919079 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-11 02:24:22.919084 | orchestrator | 2026-04-11 02:24:22.919089 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-11 02:24:22.919094 | orchestrator | Saturday 11 April 2026 02:22:59 +0000 (0:00:01.758) 0:02:37.925 ******** 2026-04-11 02:24:22.919098 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-11 02:24:22.919103 | orchestrator | 2026-04-11 02:24:22.919121 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-04-11 02:24:22.919128 | orchestrator | Saturday 11 April 2026 02:23:00 +0000 (0:00:00.964) 0:02:38.890 ********
2026-04-11 02:24:22.919133 | orchestrator | changed: [testbed-manager]
2026-04-11 02:24:22.919138 | orchestrator |
2026-04-11 02:24:22.919143 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-11 02:24:22.919148 | orchestrator | Saturday 11 April 2026 02:23:01 +0000 (0:00:00.485) 0:02:39.376 ********
2026-04-11 02:24:22.919152 | orchestrator | changed: [testbed-manager]
2026-04-11 02:24:22.919157 | orchestrator |
2026-04-11 02:24:22.919162 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-11 02:24:22.919167 | orchestrator |
2026-04-11 02:24:22.919171 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-11 02:24:22.919177 | orchestrator | Saturday 11 April 2026 02:23:01 +0000 (0:00:00.533) 0:02:39.909 ********
2026-04-11 02:24:22.919181 | orchestrator | ok: [testbed-manager]
2026-04-11 02:24:22.919186 | orchestrator |
2026-04-11 02:24:22.919191 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-11 02:24:22.919196 | orchestrator | Saturday 11 April 2026 02:23:01 +0000 (0:00:00.408) 0:02:40.317 ********
2026-04-11 02:24:22.919201 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-11 02:24:22.919206 | orchestrator |
2026-04-11 02:24:22.919211 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-11 02:24:22.919215 | orchestrator | Saturday 11 April 2026 02:23:02 +0000 (0:00:00.274) 0:02:40.591 ********
2026-04-11 02:24:22.919220 | orchestrator | ok: [testbed-manager]
2026-04-11 02:24:22.919225 | orchestrator |
2026-04-11 02:24:22.919235 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-11 02:24:22.919240 | orchestrator | Saturday 11 April 2026 02:23:03 +0000 (0:00:00.863) 0:02:41.455 ********
2026-04-11 02:24:22.919246 | orchestrator | ok: [testbed-manager]
2026-04-11 02:24:22.919251 | orchestrator |
2026-04-11 02:24:22.919268 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-11 02:24:22.919274 | orchestrator | Saturday 11 April 2026 02:23:05 +0000 (0:00:01.905) 0:02:43.361 ********
2026-04-11 02:24:22.919279 | orchestrator | changed: [testbed-manager]
2026-04-11 02:24:22.919285 | orchestrator |
2026-04-11 02:24:22.919290 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-11 02:24:22.919298 | orchestrator | Saturday 11 April 2026 02:23:05 +0000 (0:00:00.854) 0:02:44.216 ********
2026-04-11 02:24:22.919307 | orchestrator | ok: [testbed-manager]
2026-04-11 02:24:22.919315 | orchestrator |
2026-04-11 02:24:22.919323 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-11 02:24:22.919331 | orchestrator | Saturday 11 April 2026 02:23:06 +0000 (0:00:00.534) 0:02:44.751 ********
2026-04-11 02:24:22.919339 | orchestrator | changed: [testbed-manager]
2026-04-11 02:24:22.919347 | orchestrator |
2026-04-11 02:24:22.919355 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-11 02:24:22.919363 | orchestrator | Saturday 11 April 2026 02:23:14 +0000 (0:00:08.568) 0:02:53.319 ********
2026-04-11 02:24:22.919371 | orchestrator | changed: [testbed-manager]
2026-04-11 02:24:22.919378 | orchestrator |
2026-04-11 02:24:22.919386 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-11 02:24:22.919394 | orchestrator | Saturday 11 April 2026 02:23:28 +0000 (0:00:13.682) 0:03:07.002 ********
2026-04-11 02:24:22.919402 | orchestrator | ok: [testbed-manager]
2026-04-11 02:24:22.919411 | orchestrator |
2026-04-11 02:24:22.919420 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-11 02:24:22.919428 | orchestrator |
2026-04-11 02:24:22.919437 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-11 02:24:22.919442 | orchestrator | Saturday 11 April 2026 02:23:29 +0000 (0:00:00.797) 0:03:07.799 ********
2026-04-11 02:24:22.919447 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:24:22.919451 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:24:22.919456 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:24:22.919461 | orchestrator |
2026-04-11 02:24:22.919466 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-11 02:24:22.919470 | orchestrator | Saturday 11 April 2026 02:23:29 +0000 (0:00:00.326) 0:03:08.126 ********
2026-04-11 02:24:22.919475 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:24:22.919480 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:24:22.919485 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:24:22.919490 | orchestrator |
2026-04-11 02:24:22.919494 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-11 02:24:22.919499 | orchestrator | Saturday 11 April 2026 02:23:30 +0000 (0:00:00.346) 0:03:08.472 ********
2026-04-11 02:24:22.919504 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:24:22.919509 | orchestrator |
2026-04-11 02:24:22.919514 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-11 02:24:22.919518 | orchestrator | Saturday 11 April 2026 02:23:30 +0000 (0:00:00.816) 0:03:09.288 ********
2026-04-11 02:24:22.919523 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-11 02:24:22.919528 | orchestrator |
2026-04-11 02:24:22.919533 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-11 02:24:22.919538 | orchestrator | Saturday 11 April 2026 02:23:31 +0000 (0:00:00.972) 0:03:10.261 ********
2026-04-11 02:24:22.919542 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 02:24:22.919547 | orchestrator |
2026-04-11 02:24:22.919552 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-11 02:24:22.919562 | orchestrator | Saturday 11 April 2026 02:23:32 +0000 (0:00:00.935) 0:03:11.196 ********
2026-04-11 02:24:22.919567 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:24:22.919572 | orchestrator |
2026-04-11 02:24:22.919577 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-11 02:24:22.919581 | orchestrator | Saturday 11 April 2026 02:23:32 +0000 (0:00:00.134) 0:03:11.331 ********
2026-04-11 02:24:22.919586 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 02:24:22.919591 | orchestrator |
2026-04-11 02:24:22.919596 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-11 02:24:22.919601 | orchestrator | Saturday 11 April 2026 02:23:34 +0000 (0:00:01.068) 0:03:12.399 ********
2026-04-11 02:24:22.919605 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:24:22.919610 | orchestrator |
2026-04-11 02:24:22.919615 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-11 02:24:22.919620 | orchestrator | Saturday 11 April 2026 02:23:34 +0000 (0:00:00.144) 0:03:12.544 ********
2026-04-11 02:24:22.919624 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:24:22.919629 | orchestrator |
2026-04-11 02:24:22.919634 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-11 02:24:22.919639 | orchestrator | Saturday 11 April 2026 02:23:34 +0000 (0:00:00.125) 0:03:12.669 ********
2026-04-11 02:24:22.919644 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:24:22.919648 | orchestrator |
2026-04-11 02:24:22.919653 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-11 02:24:22.919662 | orchestrator | Saturday 11 April 2026 02:23:34 +0000 (0:00:00.126) 0:03:12.796 ********
2026-04-11 02:24:22.919666 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:24:22.919671 | orchestrator |
2026-04-11 02:24:22.919676 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-11 02:24:22.919681 | orchestrator | Saturday 11 April 2026 02:23:34 +0000 (0:00:00.123) 0:03:12.920 ********
2026-04-11 02:24:22.919686 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-11 02:24:22.919691 | orchestrator |
2026-04-11 02:24:22.919696 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-11 02:24:22.919701 | orchestrator | Saturday 11 April 2026 02:23:40 +0000 (0:00:05.754) 0:03:18.674 ********
2026-04-11 02:24:22.919705 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-11 02:24:22.919710 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-04-11 02:24:22.919720 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-11 02:24:47.732691 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-11 02:24:47.732851 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-11 02:24:47.732874 | orchestrator |
2026-04-11 02:24:47.732887 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-11 02:24:47.732899 | orchestrator | Saturday 11 April 2026 02:24:22 +0000 (0:00:42.575) 0:04:01.250 ********
2026-04-11 02:24:47.732955 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 02:24:47.732970 | orchestrator |
2026-04-11 02:24:47.732981 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-11 02:24:47.732992 | orchestrator | Saturday 11 April 2026 02:24:24 +0000 (0:00:01.385) 0:04:02.635 ********
2026-04-11 02:24:47.733004 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-11 02:24:47.733015 | orchestrator |
2026-04-11 02:24:47.733026 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-11 02:24:47.733037 | orchestrator | Saturday 11 April 2026 02:24:26 +0000 (0:00:01.939) 0:04:04.575 ********
2026-04-11 02:24:47.733048 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-11 02:24:47.733058 | orchestrator |
2026-04-11 02:24:47.733069 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-11 02:24:47.733081 | orchestrator | Saturday 11 April 2026 02:24:27 +0000 (0:00:01.242) 0:04:05.818 ********
2026-04-11 02:24:47.733123 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:24:47.733134 | orchestrator |
2026-04-11 02:24:47.733145 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-11 02:24:47.733156 | orchestrator | Saturday 11 April 2026 02:24:27 +0000 (0:00:00.137) 0:04:05.955 ********
2026-04-11 02:24:47.733167 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-11 02:24:47.733179 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-11 02:24:47.733190 | orchestrator |
2026-04-11 02:24:47.733201 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-11 02:24:47.733213 | orchestrator | Saturday 11 April 2026 02:24:29 +0000 (0:00:01.970) 0:04:07.926 ********
2026-04-11 02:24:47.733226 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:24:47.733238 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:24:47.733251 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:24:47.733263 | orchestrator |
2026-04-11 02:24:47.733275 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-11 02:24:47.733287 | orchestrator | Saturday 11 April 2026 02:24:29 +0000 (0:00:00.337) 0:04:08.263 ********
2026-04-11 02:24:47.733300 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:24:47.733313 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:24:47.733325 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:24:47.733335 | orchestrator |
2026-04-11 02:24:47.733346 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-11 02:24:47.733357 | orchestrator |
2026-04-11 02:24:47.733368 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-11 02:24:47.733379 | orchestrator | Saturday 11 April 2026 02:24:30 +0000 (0:00:00.890) 0:04:09.153 ********
2026-04-11 02:24:47.733389 | orchestrator | ok: [testbed-manager]
2026-04-11 02:24:47.733400 | orchestrator |
2026-04-11 02:24:47.733412 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-11 02:24:47.733423 | orchestrator | Saturday 11 April 2026 02:24:31 +0000 (0:00:00.375) 0:04:09.528 ********
2026-04-11 02:24:47.733433 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-11 02:24:47.733444 | orchestrator |
2026-04-11 02:24:47.733455 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-11 02:24:47.733465 | orchestrator | Saturday 11 April 2026 02:24:31 +0000 (0:00:00.286) 0:04:09.815 ********
2026-04-11 02:24:47.733476 | orchestrator | changed: [testbed-manager]
2026-04-11 02:24:47.733487 | orchestrator |
2026-04-11 02:24:47.733498 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-11 02:24:47.733509 | orchestrator |
2026-04-11 02:24:47.733520 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-11 02:24:47.733530 | orchestrator | Saturday 11 April 2026 02:24:37 +0000 (0:00:05.691) 0:04:15.506 ********
2026-04-11 02:24:47.733541 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:24:47.733552 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:24:47.733567 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:24:47.733587 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:24:47.733614 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:24:47.733637 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:24:47.733653 | orchestrator |
2026-04-11 02:24:47.733672 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-11 02:24:47.733691 | orchestrator | Saturday 11 April 2026 02:24:37 +0000 (0:00:00.787) 0:04:16.293 ********
2026-04-11 02:24:47.733709 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-11 02:24:47.733728 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-11 02:24:47.733745 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-11 02:24:47.733760 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-11 02:24:47.733790 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-11 02:24:47.733807 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-11 02:24:47.733823 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-11 02:24:47.733842 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-11 02:24:47.733861 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-11 02:24:47.733905 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-11 02:24:47.733948 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-11 02:24:47.733967 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-11 02:24:47.733985 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-11 02:24:47.734002 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-11 02:24:47.734082 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-11 02:24:47.734252 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-11 02:24:47.734269 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-11 02:24:47.734280 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-11 02:24:47.734291 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-11 02:24:47.734302 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-11 02:24:47.734313 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-11 02:24:47.734324 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-11 02:24:47.734335 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-11 02:24:47.734345 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-11 02:24:47.734356 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-11 02:24:47.734367 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-11 02:24:47.734377 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-11 02:24:47.734388 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-11 02:24:47.734398 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-11 02:24:47.734409 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-11 02:24:47.734420 | orchestrator |
2026-04-11 02:24:47.734431 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-11 02:24:47.734442 | orchestrator | Saturday 11 April 2026 02:24:46 +0000 (0:00:08.457) 0:04:24.751 ********
2026-04-11 02:24:47.734452 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:24:47.734463 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:24:47.734474 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:24:47.734485 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:24:47.734495 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:24:47.734506 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:24:47.734517 | orchestrator |
2026-04-11 02:24:47.734527 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-11 02:24:47.734538 | orchestrator | Saturday 11 April 2026 02:24:46 +0000 (0:00:00.573) 0:04:25.325 ********
2026-04-11 02:24:47.734549 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:24:47.734571 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:24:47.734582 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:24:47.734592 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:24:47.734603 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:24:47.734614 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:24:47.734625 | orchestrator |
2026-04-11 02:24:47.734635 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:24:47.734647 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:24:47.734660 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-11 02:24:47.734671 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-11 02:24:47.734682 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-11 02:24:47.734693 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-11 02:24:47.734704 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-11 02:24:47.734715 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-11 02:24:47.734726 | orchestrator |
2026-04-11 02:24:47.734736 | orchestrator |
2026-04-11 02:24:47.734747 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:24:47.734758 | orchestrator | Saturday 11 April 2026 02:24:47 +0000 (0:00:00.728) 0:04:26.053 ********
2026-04-11 02:24:47.734782 | orchestrator | ===============================================================================
2026-04-11 02:24:48.146554 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.07s
2026-04-11 02:24:48.146655 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.58s
2026-04-11 02:24:48.146669 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 23.39s
2026-04-11 02:24:48.146680 | orchestrator | kubectl : Install required packages ------------------------------------ 13.68s
2026-04-11 02:24:48.146689 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.93s
2026-04-11 02:24:48.146699 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.57s
2026-04-11 02:24:48.146709 | orchestrator | Manage labels ----------------------------------------------------------- 8.46s
2026-04-11 02:24:48.146718 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.96s
2026-04-11 02:24:48.146728 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.75s
2026-04-11 02:24:48.146737 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.69s
2026-04-11 02:24:48.146747 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.93s
2026-04-11 02:24:48.146758 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.77s
2026-04-11 02:24:48.146768 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.53s
2026-04-11 02:24:48.146778 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.97s
2026-04-11 02:24:48.146787 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.94s
2026-04-11 02:24:48.146797 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.91s
2026-04-11 02:24:48.146807 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.81s
2026-04-11 02:24:48.146847 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.76s
2026-04-11 02:24:48.146857 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.68s
2026-04-11 02:24:48.146868 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.68s
2026-04-11 02:24:48.493838 | orchestrator | + osism apply copy-kubeconfig
2026-04-11 02:25:00.825779 | orchestrator | 2026-04-11 02:25:00 | INFO  | Task a4dffc94-7e63-4897-a74a-38392dda1b6c (copy-kubeconfig) was prepared for execution.
2026-04-11 02:25:00.825947 | orchestrator | 2026-04-11 02:25:00 | INFO  | It takes a moment until task a4dffc94-7e63-4897-a74a-38392dda1b6c (copy-kubeconfig) has been started and output is visible here.
2026-04-11 02:25:08.384842 | orchestrator |
2026-04-11 02:25:08.385046 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-04-11 02:25:08.385068 | orchestrator |
2026-04-11 02:25:08.385082 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-11 02:25:08.385094 | orchestrator | Saturday 11 April 2026 02:25:05 +0000 (0:00:00.191) 0:00:00.191 ********
2026-04-11 02:25:08.385107 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-11 02:25:08.385119 | orchestrator |
2026-04-11 02:25:08.385131 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-11 02:25:08.385165 | orchestrator | Saturday 11 April 2026 02:25:06 +0000 (0:00:00.753) 0:00:00.945 ********
2026-04-11 02:25:08.385178 | orchestrator | changed: [testbed-manager]
2026-04-11 02:25:08.385190 | orchestrator |
2026-04-11 02:25:08.385203 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-04-11 02:25:08.385215 | orchestrator | Saturday 11 April 2026 02:25:07 +0000 (0:00:01.307) 0:00:02.252 ********
2026-04-11 02:25:08.385231 | orchestrator | changed: [testbed-manager]
2026-04-11 02:25:08.385242 | orchestrator |
2026-04-11 02:25:08.385259 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:25:08.385271 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:25:08.385284 | orchestrator |
2026-04-11 02:25:08.385296 | orchestrator |
2026-04-11 02:25:08.385308 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:25:08.385320 | orchestrator | Saturday 11 April 2026 02:25:08 +0000 (0:00:00.571) 0:00:02.824 ********
2026-04-11 02:25:08.385332 | orchestrator | ===============================================================================
2026-04-11 02:25:08.385343 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.31s
2026-04-11 02:25:08.385356 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.75s
2026-04-11 02:25:08.385367 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.57s
2026-04-11 02:25:08.741620 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-04-11 02:25:21.179821 | orchestrator | 2026-04-11 02:25:21 | INFO  | Task 3741a754-2ed5-405f-9f4b-1918c7560a0a (openstackclient) was prepared for execution.
2026-04-11 02:25:21.179999 | orchestrator | 2026-04-11 02:25:21 | INFO  | It takes a moment until task 3741a754-2ed5-405f-9f4b-1918c7560a0a (openstackclient) has been started and output is visible here.
2026-04-11 02:26:12.773480 | orchestrator |
2026-04-11 02:26:12.773609 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-11 02:26:12.773632 | orchestrator |
2026-04-11 02:26:12.773646 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-11 02:26:12.773660 | orchestrator | Saturday 11 April 2026 02:25:25 +0000 (0:00:00.244) 0:00:00.244 ********
2026-04-11 02:26:12.773674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-11 02:26:12.773689 | orchestrator |
2026-04-11 02:26:12.773737 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-11 02:26:12.773753 | orchestrator | Saturday 11 April 2026 02:25:26 +0000 (0:00:00.245) 0:00:00.490 ********
2026-04-11 02:26:12.773766 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-11 02:26:12.773781 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-11 02:26:12.773796 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-11 02:26:12.773811 | orchestrator |
2026-04-11 02:26:12.773825 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-11 02:26:12.773839 | orchestrator | Saturday 11 April 2026 02:25:27 +0000 (0:00:01.358) 0:00:01.848 ********
2026-04-11 02:26:12.773936 | orchestrator | changed: [testbed-manager]
2026-04-11 02:26:12.773954 | orchestrator |
2026-04-11 02:26:12.773971 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-11 02:26:12.773986 | orchestrator | Saturday 11 April 2026 02:25:29 +0000 (0:00:01.639) 0:00:03.488 ********
2026-04-11 02:26:12.774002 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-11 02:26:12.774089 | orchestrator | ok: [testbed-manager]
2026-04-11 02:26:12.774115 | orchestrator |
2026-04-11 02:26:12.774132 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-11 02:26:12.774149 | orchestrator | Saturday 11 April 2026 02:26:06 +0000 (0:00:36.910) 0:00:40.398 ********
2026-04-11 02:26:12.774170 | orchestrator | changed: [testbed-manager]
2026-04-11 02:26:12.774187 | orchestrator |
2026-04-11 02:26:12.774206 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-11 02:26:12.774223 | orchestrator | Saturday 11 April 2026 02:26:07 +0000 (0:00:00.980) 0:00:41.378 ********
2026-04-11 02:26:12.774242 | orchestrator | ok: [testbed-manager]
2026-04-11 02:26:12.774262 | orchestrator |
2026-04-11 02:26:12.774282 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-11 02:26:12.774299 | orchestrator | Saturday 11 April 2026 02:26:07 +0000 (0:00:00.663) 0:00:42.042 ********
2026-04-11 02:26:12.774318 | orchestrator | changed: [testbed-manager]
2026-04-11 02:26:12.774336 | orchestrator |
2026-04-11 02:26:12.774355 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-11 02:26:12.774372 | orchestrator | Saturday 11 April 2026 02:26:10 +0000 (0:00:02.642) 0:00:44.685 ********
2026-04-11 02:26:12.774388 | orchestrator | changed: [testbed-manager]
2026-04-11 02:26:12.774404 | orchestrator |
2026-04-11 02:26:12.774419 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-11 02:26:12.774435 | orchestrator | Saturday 11 April 2026 02:26:11 +0000 (0:00:00.781) 0:00:45.467 ********
2026-04-11 02:26:12.774450 | orchestrator | changed: [testbed-manager]
2026-04-11 02:26:12.774466 | orchestrator |
2026-04-11 02:26:12.774481 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-11 02:26:12.774496 | orchestrator | Saturday 11 April 2026 02:26:11 +0000 (0:00:00.611) 0:00:46.078 ********
2026-04-11 02:26:12.774512 | orchestrator | ok: [testbed-manager]
2026-04-11 02:26:12.774528 | orchestrator |
2026-04-11 02:26:12.774543 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:26:12.774558 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:26:12.774576 | orchestrator |
2026-04-11 02:26:12.774591 | orchestrator |
2026-04-11 02:26:12.774607 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:26:12.774622 | orchestrator | Saturday 11 April 2026 02:26:12 +0000 (0:00:00.481) 0:00:46.560 ********
2026-04-11 02:26:12.774637 | orchestrator | ===============================================================================
2026-04-11 02:26:12.774653 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.91s
2026-04-11 02:26:12.774669 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.64s
2026-04-11 02:26:12.774699 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.64s
2026-04-11 02:26:12.774715 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.36s
2026-04-11 02:26:12.774728 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.98s
2026-04-11 02:26:12.774742 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.78s
2026-04-11 02:26:12.774757 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.66s
2026-04-11 02:26:12.774772 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.61s
2026-04-11 02:26:12.774787 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.48s
2026-04-11 02:26:12.774803 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.25s
2026-04-11 02:26:15.439222 | orchestrator | 2026-04-11 02:26:15 | INFO  | Task 2d3cd55e-664a-4696-97de-def1ff3f3aa2 (common) was prepared for execution.
2026-04-11 02:26:15.439344 | orchestrator | 2026-04-11 02:26:15 | INFO  | It takes a moment until task 2d3cd55e-664a-4696-97de-def1ff3f3aa2 (common) has been started and output is visible here.
2026-04-11 02:26:28.646755 | orchestrator |
2026-04-11 02:26:28.646962 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-11 02:26:28.646996 | orchestrator |
2026-04-11 02:26:28.647023 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-11 02:26:28.647041 | orchestrator | Saturday 11 April 2026 02:26:20 +0000 (0:00:00.325) 0:00:00.325 ********
2026-04-11 02:26:28.647061 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 02:26:28.647080 | orchestrator |
2026-04-11 02:26:28.647097 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-11 02:26:28.647116 | orchestrator | Saturday 11 April 2026 02:26:21 +0000 (0:00:01.430) 0:00:01.755 ********
2026-04-11 02:26:28.647135 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 02:26:28.647153 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 02:26:28.647173 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 02:26:28.647193 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 02:26:28.647212 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 02:26:28.647228 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 02:26:28.647240 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 02:26:28.647251 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 02:26:28.647282 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 02:26:28.647295 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-11 02:26:28.647307 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 02:26:28.647321 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 02:26:28.647333 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 02:26:28.647346 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 02:26:28.647358 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 02:26:28.647370 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-11 02:26:28.647382 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 02:26:28.647420 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 02:26:28.647433 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 02:26:28.647445 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 02:26:28.647458 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 02:26:28.647470 | orchestrator |
2026-04-11 02:26:28.647483 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-11 02:26:28.647495 | orchestrator | Saturday 11 April 2026 02:26:24 +0000 (0:00:02.772) 0:00:04.527 ********
2026-04-11 02:26:28.647508 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2,
testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 02:26:28.647521 | orchestrator | 2026-04-11 02:26:28.647534 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-11 02:26:28.647551 | orchestrator | Saturday 11 April 2026 02:26:25 +0000 (0:00:01.483) 0:00:06.010 ******** 2026-04-11 02:26:28.647569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:28.647585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:28.647625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:28.647640 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:28.647653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:28.647664 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:28.647684 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:28.647696 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:28.647708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:28.647727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:29.679193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:29.679285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:29.679316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:29.679324 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:29.679332 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:29.679350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 
02:26:29.679358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:29.679383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:29.679391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:29.679399 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:29.679410 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:29.679418 | orchestrator | 2026-04-11 02:26:29.679426 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-11 02:26:29.679435 | orchestrator | Saturday 11 April 2026 02:26:29 +0000 (0:00:03.539) 0:00:09.550 ******** 2026-04-11 02:26:29.679444 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 02:26:29.679451 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:29.679458 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:29.679466 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:26:29.679473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 02:26:29.679489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:30.321711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:30.321906 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:26:30.321997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 02:26:30.322074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:30.322092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:30.322104 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:26:30.322116 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 02:26:30.322135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:30.322149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:30.322162 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:26:30.322199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 02:26:30.322226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:30.322240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:30.322254 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:26:30.322267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 02:26:30.322280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:30.322292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:30.322305 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:26:30.322319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 02:26:30.322341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:31.438391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:31.438502 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:26:31.438523 | orchestrator | 2026-04-11 02:26:31.438537 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-11 02:26:31.438552 | orchestrator | Saturday 11 April 2026 02:26:30 +0000 (0:00:00.987) 0:00:10.537 ******** 2026-04-11 02:26:31.438566 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 02:26:31.438582 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:31.438596 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:31.438608 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:26:31.438640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 02:26:31.438668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:31.438708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:31.438718 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:26:31.438755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 02:26:31.438770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:31.438782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:31.438794 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:26:31.438805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 02:26:31.438817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-11 02:26:31.438835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:31.438923 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:26:31.438939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 02:26:31.438976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:37.012872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:37.012971 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:26:37.012985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 02:26:37.012997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:37.013007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:37.013015 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:26:37.013023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 02:26:37.013051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:37.013061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:37.013069 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:26:37.013077 | orchestrator |
2026-04-11 02:26:37.013086 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-11 02:26:37.013096 | orchestrator | Saturday 11 April 2026 02:26:32 +0000 (0:00:02.199) 0:00:12.737 ********
2026-04-11 02:26:37.013104 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:26:37.013112 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:26:37.013120 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:26:37.013128 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:26:37.013149 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:26:37.013158 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:26:37.013166 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:26:37.013174 | orchestrator |
2026-04-11 02:26:37.013182 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-11 02:26:37.013190 | orchestrator | Saturday 11 April 2026 02:26:33 +0000 (0:00:00.794) 0:00:13.531 ********
2026-04-11 02:26:37.013198 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:26:37.013206 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:26:37.013214 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:26:37.013222 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:26:37.013231 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:26:37.013239 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:26:37.013246 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:26:37.013254 | orchestrator |
2026-04-11 02:26:37.013262 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-04-11 02:26:37.013274 | orchestrator | Saturday 11 April 2026 02:26:34 +0000 (0:00:00.941) 0:00:14.473 ********
2026-04-11 02:26:37.013290 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:37.013323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:37.013348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:37.013369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:37.013385 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:37.013399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:37.013431 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:39.860589 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:39.860689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:39.860728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:39.860755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:39.860768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:39.860780 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:39.860905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:39.860937 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:39.860976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:39.861021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:39.861040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:39.861056 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:39.861077 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:39.861096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:39.861114 | orchestrator |
2026-04-11 02:26:39.861135 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-04-11 02:26:39.861157 | orchestrator | Saturday 11 April 2026 02:26:37 +0000 (0:00:03.548) 0:00:18.021 ********
2026-04-11 02:26:39.861176 | orchestrator | [WARNING]: Skipped
2026-04-11 02:26:39.861198 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-04-11 02:26:39.861219 | orchestrator | to this access issue:
2026-04-11 02:26:39.861239 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-04-11 02:26:39.861260 | orchestrator | directory
2026-04-11 02:26:39.861280 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 02:26:39.861300 | orchestrator |
2026-04-11 02:26:39.861321 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-04-11 02:26:39.861340 | orchestrator | Saturday 11 April 2026 02:26:38 +0000 (0:00:01.037) 0:00:19.059 ********
2026-04-11 02:26:39.861359 | orchestrator | [WARNING]: Skipped
2026-04-11 02:26:39.861385 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-04-11 02:26:50.275628 | orchestrator | to this access issue:
2026-04-11 02:26:50.275771 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-04-11 02:26:50.275798 | orchestrator | directory
2026-04-11 02:26:50.275820 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 02:26:50.275940 | orchestrator |
2026-04-11 02:26:50.275962 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-04-11 02:26:50.275983 | orchestrator | Saturday 11 April 2026 02:26:40 +0000 (0:00:01.317) 0:00:20.376 ********
2026-04-11 02:26:50.276036 | orchestrator | [WARNING]: Skipped
2026-04-11 02:26:50.276055 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-04-11 02:26:50.276074 | orchestrator | to this access issue:
2026-04-11 02:26:50.276095 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-04-11 02:26:50.276114 | orchestrator | directory
2026-04-11 02:26:50.276133 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 02:26:50.276152 | orchestrator |
2026-04-11 02:26:50.276172 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-04-11 02:26:50.276190 | orchestrator | Saturday 11 April 2026 02:26:41 +0000 (0:00:00.915) 0:00:21.292 ********
2026-04-11 02:26:50.276209 | orchestrator | [WARNING]: Skipped
2026-04-11 02:26:50.276228 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-04-11 02:26:50.276246 | orchestrator | to this access issue:
2026-04-11 02:26:50.276297 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-04-11 02:26:50.276319 | orchestrator | directory
2026-04-11 02:26:50.276337 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 02:26:50.276356 | orchestrator |
2026-04-11 02:26:50.276374 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-04-11 02:26:50.276393 | orchestrator | Saturday 11 April 2026 02:26:41 +0000 (0:00:00.881) 0:00:22.173 ********
2026-04-11 02:26:50.276413 | orchestrator | changed: [testbed-manager]
2026-04-11 02:26:50.276433 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:26:50.276451 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:26:50.276469 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:26:50.276487 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:26:50.276505 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:26:50.276544 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:26:50.276563 | orchestrator |
2026-04-11 02:26:50.276581 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-04-11 02:26:50.276597 | orchestrator | Saturday 11 April 2026 02:26:44 +0000 (0:00:02.648) 0:00:24.822 ********
2026-04-11 02:26:50.276613 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-11 02:26:50.276632 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-11 02:26:50.276649 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-11 02:26:50.276667 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-11 02:26:50.276684 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-11 02:26:50.276702 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-11 02:26:50.276727 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-11 02:26:50.276745 | orchestrator |
2026-04-11 02:26:50.276763 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-04-11 02:26:50.276781 | orchestrator | Saturday 11 April 2026 02:26:46 +0000 (0:00:02.206) 0:00:27.028 ********
2026-04-11 02:26:50.276800 | orchestrator | changed: [testbed-manager]
2026-04-11 02:26:50.276817 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:26:50.276857 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:26:50.276874 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:26:50.276891 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:26:50.276908 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:26:50.276925 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:26:50.276944 | orchestrator |
2026-04-11 02:26:50.276962 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-04-11 02:26:50.276998 | orchestrator | Saturday 11 
April 2026 02:26:48 +0000 (0:00:02.049) 0:00:29.078 ******** 2026-04-11 02:26:50.277022 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:50.277144 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:50.277169 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:50.277189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:50.277209 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:50.277228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:50.277253 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 02:26:50.277275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:26:50.277298 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:26:50.277322 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-04-11 02:26:56.421774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:56.421907 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:56.421920 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:56.421941 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 02:26:56.421949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:56.421972 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 02:26:56.421980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:56.422000 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:56.422008 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:56.422054 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:56.422064 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:56.422072 | orchestrator |
2026-04-11 02:26:56.422080 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-04-11 02:26:56.422089 | orchestrator | Saturday 11 April 2026 02:26:50 +0000 (0:00:01.597) 0:00:30.675 ********
2026-04-11 02:26:56.422095 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-11 02:26:56.422102 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-11 02:26:56.422116 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-11 02:26:56.422123 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-11 02:26:56.422130 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-11 02:26:56.422136 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-11 02:26:56.422143 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-11 02:26:56.422150 | orchestrator |
2026-04-11 02:26:56.422157 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-04-11 02:26:56.422163 | orchestrator | Saturday 11 April 2026 02:26:52 +0000 (0:00:02.013) 0:00:32.689 ********
2026-04-11 02:26:56.422170 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-11 02:26:56.422178 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-11 02:26:56.422184 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-11 02:26:56.422196 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-11 02:26:56.422203 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-11 02:26:56.422210 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-11 02:26:56.422216 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-11 02:26:56.422223 | orchestrator |
2026-04-11 02:26:56.422229 | orchestrator | TASK [common : Check common containers] ****************************************
2026-04-11 02:26:56.422236 | orchestrator | Saturday 11 April 2026 02:26:54 +0000 (0:00:01.833) 0:00:34.522 ********
2026-04-11 02:26:56.422256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 02:26:56.422269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 02:26:57.072255 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 02:26:57.072360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 02:26:57.072401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 02:26:57.072431 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 02:26:57.072443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:57.072471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 02:26:57.072483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:57.072515 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:57.072528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:57.072548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:57.072566 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:57.072578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:57.072592 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:57.072603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:26:57.072624 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:28:27.986283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:28:27.986425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:28:27.986440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:28:27.986460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:28:27.986468 | orchestrator |
2026-04-11 02:28:27.986476 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-04-11 02:28:27.986484 | orchestrator | Saturday 11 April 2026 02:26:57 +0000 (0:00:02.761) 0:00:37.284 ********
2026-04-11 02:28:27.986491 | orchestrator | changed: [testbed-manager]
2026-04-11 02:28:27.986499 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:28:27.986506 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:28:27.986512 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:28:27.986519 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:28:27.986526 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:28:27.986533 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:28:27.986539 | orchestrator |
2026-04-11 02:28:27.986546 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-04-11 02:28:27.986553 | orchestrator | Saturday 11 April 2026 02:26:58 +0000 (0:00:01.434) 0:00:38.719 ********
2026-04-11 02:28:27.986560 | orchestrator | changed: [testbed-manager]
2026-04-11 02:28:27.986566 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:28:27.986577 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:28:27.986591 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:28:27.986608 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:28:27.986619 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:28:27.986629 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:28:27.986639 | orchestrator |
2026-04-11 02:28:27.986650 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 02:28:27.986660 | orchestrator | Saturday 11 April 2026 02:26:59 +0000 (0:00:01.097) 0:00:39.816 ********
2026-04-11 02:28:27.986671 | orchestrator |
2026-04-11 02:28:27.986681 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 02:28:27.986691 | orchestrator | Saturday 11 April 2026 02:26:59 +0000 (0:00:00.067) 0:00:39.883 ********
2026-04-11 02:28:27.986701 | orchestrator |
2026-04-11 02:28:27.986711 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 02:28:27.986722 | orchestrator | Saturday 11 April 2026 02:26:59 +0000 (0:00:00.073) 0:00:39.957 ********
2026-04-11 02:28:27.986732 | orchestrator |
2026-04-11 02:28:27.986743 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 02:28:27.986753 | orchestrator | Saturday 11 April 2026 02:26:59 +0000 (0:00:00.078) 0:00:40.036 ********
2026-04-11 02:28:27.986764 | orchestrator |
2026-04-11 02:28:27.986774 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 02:28:27.986824 | orchestrator | Saturday 11 April 2026 02:27:00 +0000 (0:00:00.239) 0:00:40.275 ********
2026-04-11 02:28:27.986837 | orchestrator |
2026-04-11 02:28:27.986848 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 02:28:27.986860 | orchestrator | Saturday 11 April 2026 02:27:00 +0000 (0:00:00.062) 0:00:40.338 ********
2026-04-11 02:28:27.986872 | orchestrator |
2026-04-11 02:28:27.986897 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 02:28:27.986920 | orchestrator | Saturday 11 April 2026 02:27:00 +0000 (0:00:00.085) 0:00:40.424 ********
2026-04-11 02:28:27.986932 | orchestrator |
2026-04-11 02:28:27.986944 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-11 02:28:27.986954 | orchestrator | Saturday 11 April 2026 02:27:00 +0000 (0:00:00.097) 0:00:40.521 ********
2026-04-11 02:28:27.986962 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:28:27.986970 | orchestrator | changed: [testbed-manager]
2026-04-11 02:28:27.986976 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:28:27.986983 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:28:27.986990 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:28:27.987015 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:28:27.987022 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:28:27.987029 | orchestrator |
2026-04-11 02:28:27.987036 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-04-11 02:28:27.987042 | orchestrator | Saturday 11 April 2026 02:27:41 +0000 (0:00:41.154) 0:01:21.675 ********
2026-04-11 02:28:27.987049 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:28:27.987056 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:28:27.987062 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:28:27.987069 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:28:27.987076 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:28:27.987082 | orchestrator | changed: [testbed-manager]
2026-04-11 02:28:27.987089 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:28:27.987095 | orchestrator |
2026-04-11 02:28:27.987102 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-04-11 02:28:27.987108 | orchestrator | Saturday 11 April 2026 02:28:17 +0000 (0:00:35.740) 0:01:57.416 ********
2026-04-11 02:28:27.987115 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:28:27.987123 | orchestrator | ok: [testbed-manager]
2026-04-11 02:28:27.987130 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:28:27.987136 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:28:27.987143 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:28:27.987149 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:28:27.987156 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:28:27.987162 | orchestrator |
2026-04-11 02:28:27.987169 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-04-11 02:28:27.987175 | orchestrator | Saturday 11 April 2026 02:28:19 +0000 (0:00:01.965) 0:01:59.382 ********
2026-04-11 02:28:27.987182 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:28:27.987189 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:28:27.987195 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:28:27.987202 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:28:27.987209 | orchestrator | changed: [testbed-manager]
2026-04-11 02:28:27.987215 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:28:27.987222 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:28:27.987228 | orchestrator |
2026-04-11 02:28:27.987235 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:28:27.987243 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-11 02:28:27.987252 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-11 02:28:27.987268 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-11 02:28:27.987281 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-11 02:28:27.987288 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-11 02:28:27.987295 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-11 02:28:27.987301 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-11 02:28:27.987308 | orchestrator |
2026-04-11 02:28:27.987315 | orchestrator |
2026-04-11 02:28:27.987322 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:28:27.987328 | orchestrator | Saturday 11 April 2026 02:28:27 +0000 (0:00:08.789) 0:02:08.171 ********
2026-04-11 02:28:27.987335 | orchestrator | ===============================================================================
2026-04-11 02:28:27.987342 | orchestrator | common : Restart fluentd container ------------------------------------- 41.15s
2026-04-11 02:28:27.987349 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.74s
2026-04-11 02:28:27.987355 | orchestrator | common : Restart cron container ----------------------------------------- 8.79s
2026-04-11 02:28:27.987362 | orchestrator | common : Copying over config.json files for services -------------------- 3.55s
2026-04-11 02:28:27.987368 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.54s
2026-04-11 02:28:27.987375 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.77s
2026-04-11 02:28:27.987382 | orchestrator | common : Check common containers ---------------------------------------- 2.76s
2026-04-11 02:28:27.987388 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.65s
2026-04-11 02:28:27.987395 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.21s
2026-04-11 02:28:27.987402 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.20s
2026-04-11 02:28:27.987408 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.05s
2026-04-11 02:28:27.987415 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.01s
2026-04-11 02:28:27.987421 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.97s
2026-04-11 02:28:27.987428 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.83s
2026-04-11 02:28:27.987434 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.60s
2026-04-11 02:28:27.987441 | orchestrator | common : include_tasks -------------------------------------------------- 1.48s
2026-04-11 02:28:27.987452 | orchestrator | common : Creating log volume -------------------------------------------- 1.43s
2026-04-11 02:28:28.509146 | orchestrator | common : include_tasks -------------------------------------------------- 1.43s
2026-04-11 02:28:28.509283 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.32s
2026-04-11 02:28:28.509313 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.10s
2026-04-11 02:28:31.321827 | orchestrator | 2026-04-11 02:28:31 | INFO  | Task 66a7732c-e312-4a3c-84aa-b36b18f7e9dd (loadbalancer) was prepared for execution.
2026-04-11 02:28:31.321935 | orchestrator | 2026-04-11 02:28:31 | INFO  | It takes a moment until task 66a7732c-e312-4a3c-84aa-b36b18f7e9dd (loadbalancer) has been started and output is visible here.
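The loop items echoed by tasks such as "Check common containers" above are kolla-ansible service definitions: each dict names a container, its image, environment, and bind mounts. Purely as an illustration (this helper is hypothetical and not the actual `kolla_docker` module logic), such an item can be mapped onto an equivalent `docker run` invocation:

```python
# Hypothetical sketch: render one logged kolla-ansible service item as a
# docker run command. The dict below is the 'cron' item from the log above
# (abbreviated); the render function is illustrative only.
item = {
    "key": "cron",
    "value": {
        "container_name": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/cron:3.0.20251130",
        "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
        "volumes": [
            "/etc/kolla/cron/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
    },
}

def render_docker_run(item):
    """Build a docker run command line from a service definition dict."""
    v = item["value"]
    parts = ["docker", "run", "-d", "--name", v["container_name"]]
    for name, val in v.get("environment", {}).items():
        parts += ["-e", f"{name}={val}"]          # env vars -> -e flags
    for mount in v.get("volumes", []):
        parts += ["-v", mount]                    # bind mounts -> -v flags
    if v.get("privileged"):                       # e.g. kolla-toolbox, haproxy
        parts.append("--privileged")
    parts.append(v["image"])
    return " ".join(parts)

print(render_docker_run(item))
```

Note how the privileged kolla-toolbox and haproxy items in the log would additionally pick up `--privileged`, while cron and fluentd would not.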
2026-04-11 02:28:46.125628 | orchestrator |
2026-04-11 02:28:46.125765 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 02:28:46.125822 | orchestrator |
2026-04-11 02:28:46.125836 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 02:28:46.125848 | orchestrator | Saturday 11 April 2026 02:28:35 +0000 (0:00:00.255) 0:00:00.255 ********
2026-04-11 02:28:46.125880 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:28:46.125893 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:28:46.125904 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:28:46.125915 | orchestrator |
2026-04-11 02:28:46.125927 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 02:28:46.125938 | orchestrator | Saturday 11 April 2026 02:28:36 +0000 (0:00:00.335) 0:00:00.590 ********
2026-04-11 02:28:46.125950 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-11 02:28:46.125961 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-11 02:28:46.125972 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-11 02:28:46.125983 | orchestrator |
2026-04-11 02:28:46.125994 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-11 02:28:46.126005 | orchestrator |
2026-04-11 02:28:46.126104 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-11 02:28:46.126120 | orchestrator | Saturday 11 April 2026 02:28:36 +0000 (0:00:00.496) 0:00:01.087 ********
2026-04-11 02:28:46.126140 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:28:46.126152 | orchestrator |
2026-04-11 02:28:46.126164 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-11 02:28:46.126176 | orchestrator | Saturday 11 April 2026 02:28:37 +0000 (0:00:00.600) 0:00:01.688 ********
2026-04-11 02:28:46.126189 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:28:46.126202 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:28:46.126214 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:28:46.126227 | orchestrator |
2026-04-11 02:28:46.126240 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-11 02:28:46.126252 | orchestrator | Saturday 11 April 2026 02:28:38 +0000 (0:00:00.635) 0:00:02.323 ********
2026-04-11 02:28:46.126265 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:28:46.126278 | orchestrator |
2026-04-11 02:28:46.126290 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-11 02:28:46.126303 | orchestrator | Saturday 11 April 2026 02:28:38 +0000 (0:00:00.755) 0:00:03.079 ********
2026-04-11 02:28:46.126315 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:28:46.126327 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:28:46.126340 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:28:46.126353 | orchestrator |
2026-04-11 02:28:46.126365 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-11 02:28:46.126377 | orchestrator | Saturday 11 April 2026 02:28:39 +0000 (0:00:00.604) 0:00:03.684 ********
2026-04-11 02:28:46.126390 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-11 02:28:46.126403 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-11 02:28:46.126415 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-11 02:28:46.126428 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-11 02:28:46.126440 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-11 02:28:46.126452 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-11 02:28:46.126465 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-11 02:28:46.126479 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-11 02:28:46.126491 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-11 02:28:46.126504 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-11 02:28:46.126524 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-11 02:28:46.126535 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-11 02:28:46.126546 | orchestrator |
2026-04-11 02:28:46.126557 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-11 02:28:46.126568 | orchestrator | Saturday 11 April 2026 02:28:41 +0000 (0:00:02.175) 0:00:05.860 ********
2026-04-11 02:28:46.126579 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-11 02:28:46.126591 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-11 02:28:46.126602 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-11 02:28:46.126613 | orchestrator |
2026-04-11 02:28:46.126625 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-11 02:28:46.126644 | orchestrator | Saturday 11 April 2026 02:28:42 +0000 (0:00:00.778) 0:00:06.638 ********
2026-04-11 02:28:46.126662 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-11 02:28:46.126683 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-11 02:28:46.126701 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-11 02:28:46.126719 | orchestrator |
2026-04-11 02:28:46.126736 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-11 02:28:46.126755 | orchestrator | Saturday 11 April 2026 02:28:43 +0000 (0:00:01.316) 0:00:07.955 ********
2026-04-11 02:28:46.126775 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-11 02:28:46.126833 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:28:46.126878 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-11 02:28:46.126892 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:28:46.126903 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-11 02:28:46.126914 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:28:46.126980 | orchestrator |
2026-04-11 02:28:46.126992 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-11 02:28:46.127003 | orchestrator | Saturday 11 April 2026 02:28:44 +0000 (0:00:00.598) 0:00:08.554 ********
2026-04-11 02:28:46.127017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-11 02:28:46.127043 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-11 02:28:46.127055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-11 02:28:46.127076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 
02:28:46.127087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:28:46.127108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:28:51.492739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 02:28:51.492862 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 02:28:51.492876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 02:28:51.492886 | orchestrator | 2026-04-11 02:28:51.492896 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-11 02:28:51.492906 | orchestrator | Saturday 11 April 2026 02:28:46 +0000 (0:00:01.833) 0:00:10.387 ******** 2026-04-11 02:28:51.492914 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:28:51.492942 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:28:51.492950 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:28:51.492959 | orchestrator | 2026-04-11 02:28:51.492967 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-11 02:28:51.492975 | orchestrator | Saturday 11 April 2026 02:28:46 +0000 (0:00:00.876) 0:00:11.263 ******** 2026-04-11 02:28:51.492984 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-11 02:28:51.492992 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-11 
02:28:51.493000 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-04-11 02:28:51.493008 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-11 02:28:51.493016 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-11 02:28:51.493024 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-11 02:28:51.493032 | orchestrator | 2026-04-11 02:28:51.493039 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-11 02:28:51.493047 | orchestrator | Saturday 11 April 2026 02:28:48 +0000 (0:00:01.556) 0:00:12.820 ******** 2026-04-11 02:28:51.493055 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:28:51.493063 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:28:51.493071 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:28:51.493079 | orchestrator | 2026-04-11 02:28:51.493087 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-11 02:28:51.493095 | orchestrator | Saturday 11 April 2026 02:28:49 +0000 (0:00:00.884) 0:00:13.705 ******** 2026-04-11 02:28:51.493103 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:28:51.493111 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:28:51.493118 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:28:51.493126 | orchestrator | 2026-04-11 02:28:51.493134 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-11 02:28:51.493142 | orchestrator | Saturday 11 April 2026 02:28:50 +0000 (0:00:01.427) 0:00:15.132 ******** 2026-04-11 02:28:51.493151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 02:28:51.493175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:28:51.493185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:28:51.493195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__3f285da1611979d9ba9d740fbb3b596a9cbe93e4', '__omit_place_holder__3f285da1611979d9ba9d740fbb3b596a9cbe93e4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-11 02:28:51.493210 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:28:51.493219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 02:28:51.493260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:28:51.493270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:28:51.493280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3f285da1611979d9ba9d740fbb3b596a9cbe93e4', '__omit_place_holder__3f285da1611979d9ba9d740fbb3b596a9cbe93e4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-11 02:28:51.493290 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:28:51.493315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 02:28:54.332738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:28:54.333674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:28:54.333713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3f285da1611979d9ba9d740fbb3b596a9cbe93e4', '__omit_place_holder__3f285da1611979d9ba9d740fbb3b596a9cbe93e4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-11 02:28:54.333723 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:28:54.333733 | orchestrator | 2026-04-11 02:28:54.333743 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-04-11 02:28:54.333752 | orchestrator | Saturday 11 April 2026 02:28:51 +0000 (0:00:00.626) 0:00:15.758 ******** 2026-04-11 02:28:54.333761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-11 02:28:54.333771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-11 02:28:54.333814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-11 02:28:54.333868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:28:54.333879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:28:54.333888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:28:54.333896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3f285da1611979d9ba9d740fbb3b596a9cbe93e4', '__omit_place_holder__3f285da1611979d9ba9d740fbb3b596a9cbe93e4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-11 02:28:54.333904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:28:54.333913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3f285da1611979d9ba9d740fbb3b596a9cbe93e4', 
'__omit_place_holder__3f285da1611979d9ba9d740fbb3b596a9cbe93e4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-11 02:28:54.333946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:29:02.918907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:02.919019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3f285da1611979d9ba9d740fbb3b596a9cbe93e4', 
'__omit_place_holder__3f285da1611979d9ba9d740fbb3b596a9cbe93e4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-11 02:29:02.919033 | orchestrator | 2026-04-11 02:29:02.919044 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-11 02:29:02.919054 | orchestrator | Saturday 11 April 2026 02:28:54 +0000 (0:00:02.844) 0:00:18.603 ******** 2026-04-11 02:29:02.919062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-11 02:29:02.919073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-11 02:29:02.919081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-11 02:29:02.919112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:29:02.919152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:29:02.919162 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:29:02.919170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 02:29:02.919179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 02:29:02.919187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 02:29:02.919195 | orchestrator | 2026-04-11 02:29:02.919204 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-11 02:29:02.919212 | orchestrator | Saturday 11 April 2026 02:28:57 +0000 (0:00:03.159) 0:00:21.762 ******** 2026-04-11 02:29:02.919227 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-11 02:29:02.919236 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-11 02:29:02.919244 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-11 02:29:02.919252 | orchestrator | 2026-04-11 02:29:02.919259 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-11 02:29:02.919267 | orchestrator | Saturday 11 April 2026 02:28:59 +0000 (0:00:01.901) 0:00:23.663 ******** 2026-04-11 02:29:02.919275 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-11 02:29:02.919283 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-11 02:29:02.919291 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-11 02:29:02.919299 | orchestrator | 2026-04-11 02:29:02.919307 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-11 02:29:02.919314 | orchestrator | Saturday 11 April 2026 02:29:02 +0000 
(0:00:02.924) 0:00:26.588 ******** 2026-04-11 02:29:02.919322 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:02.919332 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:02.919339 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:02.919348 | orchestrator | 2026-04-11 02:29:02.919362 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-11 02:29:14.937614 | orchestrator | Saturday 11 April 2026 02:29:02 +0000 (0:00:00.603) 0:00:27.192 ******** 2026-04-11 02:29:14.937740 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-11 02:29:14.937886 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-11 02:29:14.937939 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-11 02:29:14.937956 | orchestrator | 2026-04-11 02:29:14.937973 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-11 02:29:14.937990 | orchestrator | Saturday 11 April 2026 02:29:05 +0000 (0:00:02.163) 0:00:29.355 ******** 2026-04-11 02:29:14.938007 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-11 02:29:14.938094 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-11 02:29:14.938114 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-11 02:29:14.938132 | orchestrator | 2026-04-11 02:29:14.938150 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-11 02:29:14.938166 | orchestrator | Saturday 11 April 2026 
02:29:07 +0000 (0:00:02.171) 0:00:31.527 ******** 2026-04-11 02:29:14.938184 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-11 02:29:14.938202 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-11 02:29:14.938219 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-11 02:29:14.938234 | orchestrator | 2026-04-11 02:29:14.938271 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-11 02:29:14.938289 | orchestrator | Saturday 11 April 2026 02:29:08 +0000 (0:00:01.504) 0:00:33.031 ******** 2026-04-11 02:29:14.938307 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-11 02:29:14.938324 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-11 02:29:14.938341 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-11 02:29:14.938359 | orchestrator | 2026-04-11 02:29:14.938407 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-11 02:29:14.938426 | orchestrator | Saturday 11 April 2026 02:29:10 +0000 (0:00:01.598) 0:00:34.630 ******** 2026-04-11 02:29:14.938445 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:29:14.938462 | orchestrator | 2026-04-11 02:29:14.938478 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-04-11 02:29:14.938490 | orchestrator | Saturday 11 April 2026 02:29:10 +0000 (0:00:00.570) 0:00:35.200 ******** 2026-04-11 02:29:14.938503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-11 02:29:14.938518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-11 02:29:14.938535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-11 02:29:14.938571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:29:14.938582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:29:14.938592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:29:14.938613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 02:29:14.938624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 02:29:14.938635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 02:29:14.938645 | orchestrator | 2026-04-11 02:29:14.938655 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-04-11 02:29:14.938665 | orchestrator | Saturday 11 April 2026 02:29:14 +0000 (0:00:03.354) 0:00:38.554 ******** 2026-04-11 02:29:14.938689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 02:29:15.779112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:15.779199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:15.779232 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:15.779243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 02:29:15.779251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:15.779259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:15.779266 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:15.779274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 02:29:15.779309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:15.779317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:15.779331 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:15.779339 | orchestrator | 2026-04-11 02:29:15.779347 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-04-11 
02:29:15.779355 | orchestrator | Saturday 11 April 2026 02:29:14 +0000 (0:00:00.652) 0:00:39.207 ******** 2026-04-11 02:29:15.779364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 02:29:15.779372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:15.779379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:15.779387 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:15.779394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 02:29:15.779410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:16.653697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:16.653881 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:16.653903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 02:29:16.653917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:16.653929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:16.653940 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:16.653952 | orchestrator | 2026-04-11 02:29:16.653964 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-11 02:29:16.653976 | orchestrator | Saturday 11 April 2026 02:29:15 +0000 (0:00:00.837) 0:00:40.045 ******** 2026-04-11 02:29:16.653988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 02:29:16.654000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:16.654183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:16.654220 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:16.654241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 02:29:16.654262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:16.654283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:16.654301 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:16.654319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 02:29:16.654360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:16.654390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:16.654434 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:18.087182 | orchestrator | 2026-04-11 02:29:18.087282 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-11 02:29:18.087296 | orchestrator | Saturday 11 April 2026 02:29:16 +0000 (0:00:00.870) 0:00:40.915 ******** 2026-04-11 02:29:18.087311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 02:29:18.087326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:18.087337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:18.087347 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:18.087359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 02:29:18.087384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:18.087412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:18.087443 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:18.087471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 02:29:18.087483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:18.087493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:18.087503 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:18.087513 | orchestrator | 2026-04-11 02:29:18.087523 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-11 02:29:18.087533 | orchestrator | Saturday 11 April 2026 02:29:17 +0000 (0:00:00.588) 0:00:41.504 ******** 2026-04-11 02:29:18.087543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 02:29:18.087554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:18.087575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:18.087585 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:18.087609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 02:29:19.195051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:19.196209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:19.196278 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:19.196294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 02:29:19.196307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:19.196318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:19.196353 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:19.196364 | orchestrator | 2026-04-11 02:29:19.196375 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-04-11 02:29:19.196386 | orchestrator | Saturday 11 April 2026 02:29:18 +0000 (0:00:00.853) 0:00:42.357 ******** 2026-04-11 02:29:19.196410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-04-11 02:29:19.196448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:19.196460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:19.196470 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:19.196480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-04-11 02:29:19.196490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:19.196508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:19.196518 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:19.196533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-04-11 02:29:19.196550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:20.649541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:20.649629 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:20.649642 | orchestrator | 2026-04-11 02:29:20.649650 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-04-11 02:29:20.649659 | orchestrator | Saturday 11 April 2026 02:29:19 +0000 (0:00:01.104) 0:00:43.462 ******** 2026-04-11 02:29:20.649668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 02:29:20.649678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:20.649704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:20.649713 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:20.649721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 02:29:20.649741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:20.649763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:20.649860 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:20.649872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 02:29:20.649880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:20.649894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:20.649902 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:20.649909 | orchestrator | 2026-04-11 02:29:20.649917 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-04-11 02:29:20.649925 | orchestrator | Saturday 11 April 2026 02:29:19 +0000 (0:00:00.603) 0:00:44.066 ******** 2026-04-11 02:29:20.649933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 02:29:20.649941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:20.649962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:27.322416 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:27.322595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 02:29:27.322634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:27.322694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:27.322720 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:27.322741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 02:29:27.322762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 02:29:27.322947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 02:29:27.322971 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:27.322991 | orchestrator | 2026-04-11 02:29:27.323014 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-11 02:29:27.323036 | orchestrator | Saturday 11 April 2026 02:29:20 +0000 (0:00:00.850) 0:00:44.916 ******** 2026-04-11 02:29:27.323056 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-11 02:29:27.323104 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-11 02:29:27.323126 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-11 02:29:27.323146 | orchestrator | 2026-04-11 02:29:27.323164 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-11 02:29:27.323184 | orchestrator | Saturday 11 April 2026 02:29:22 +0000 (0:00:01.749) 0:00:46.666 ******** 2026-04-11 02:29:27.323203 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-11 02:29:27.323224 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-11 02:29:27.323243 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-11 02:29:27.323263 | orchestrator | 2026-04-11 02:29:27.323300 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-11 02:29:27.323321 | orchestrator | Saturday 11 April 2026 02:29:24 +0000 (0:00:01.724) 0:00:48.391 ******** 2026-04-11 02:29:27.323342 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-11 02:29:27.323361 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-11 02:29:27.323380 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-11 02:29:27.323398 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-11 02:29:27.323417 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:27.323435 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-11 02:29:27.323454 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:27.323473 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-11 02:29:27.323492 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:27.323511 | orchestrator | 2026-04-11 02:29:27.323530 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-04-11 02:29:27.323548 | orchestrator | Saturday 11 April 2026 02:29:24 +0000 (0:00:00.822) 0:00:49.213 ******** 2026-04-11 02:29:27.323569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-11 02:29:27.323590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-11 02:29:27.323619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-11 02:29:27.323657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:29:31.920969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:29:31.921065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 02:29:31.921079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 02:29:31.921089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 02:29:31.921099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 02:29:31.921109 | orchestrator | 2026-04-11 02:29:31.921119 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-11 02:29:31.921146 | orchestrator | Saturday 11 April 2026 02:29:27 +0000 (0:00:02.379) 0:00:51.593 ******** 2026-04-11 02:29:31.921156 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:29:31.921164 | orchestrator | 2026-04-11 02:29:31.921173 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-11 02:29:31.921182 | orchestrator | Saturday 11 April 2026 02:29:28 +0000 (0:00:00.931) 0:00:52.524 ******** 2026-04-11 02:29:31.921209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 02:29:31.921240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 02:29:31.921250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 02:29:31.921260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 02:29:31.921269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 02:29:31.921283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 02:29:31.921292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 02:29:31.921315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 02:29:32.653010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 02:29:32.653106 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 02:29:32.653119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 02:29:32.653142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 02:29:32.653151 | orchestrator | 2026-04-11 02:29:32.653160 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-04-11 02:29:32.653168 | orchestrator | Saturday 11 April 2026 02:29:31 +0000 (0:00:03.657) 0:00:56.182 ******** 2026-04-11 02:29:32.653177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-11 02:29:32.653217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 02:29:32.653227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 02:29:32.653234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 02:29:32.653241 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:32.653249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-11 02:29:32.653260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 02:29:32.653272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 02:29:32.653279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 02:29:32.653286 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:32.653300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-11 02:29:41.651276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 02:29:41.651371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-04-11 02:29:41.651381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 02:29:41.651408 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:41.651419 | orchestrator | 2026-04-11 02:29:41.651429 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-11 02:29:41.651440 | orchestrator | Saturday 11 April 2026 02:29:32 +0000 (0:00:00.741) 0:00:56.924 ******** 2026-04-11 02:29:41.651450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-11 02:29:41.651462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-11 02:29:41.651472 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:41.651497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-11 02:29:41.651507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-11 02:29:41.651517 | 
orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:41.651526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-11 02:29:41.651536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-11 02:29:41.651544 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:41.651549 | orchestrator | 2026-04-11 02:29:41.651555 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-11 02:29:41.651560 | orchestrator | Saturday 11 April 2026 02:29:33 +0000 (0:00:01.216) 0:00:58.140 ******** 2026-04-11 02:29:41.651566 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:29:41.651571 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:29:41.651577 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:29:41.651582 | orchestrator | 2026-04-11 02:29:41.651588 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-11 02:29:41.651594 | orchestrator | Saturday 11 April 2026 02:29:35 +0000 (0:00:01.343) 0:00:59.484 ******** 2026-04-11 02:29:41.651599 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:29:41.651605 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:29:41.651610 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:29:41.651615 | orchestrator | 2026-04-11 02:29:41.651621 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-11 02:29:41.651626 | orchestrator | Saturday 11 April 2026 02:29:37 +0000 (0:00:02.116) 0:01:01.600 ******** 2026-04-11 02:29:41.651631 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:29:41.651637 | 
orchestrator | 2026-04-11 02:29:41.651656 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-11 02:29:41.651662 | orchestrator | Saturday 11 April 2026 02:29:38 +0000 (0:00:00.702) 0:01:02.303 ******** 2026-04-11 02:29:41.651670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 02:29:41.651686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 02:29:41.651697 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 02:29:41.651703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 02:29:41.651708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 02:29:41.651720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 02:29:42.280159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 02:29:42.281401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 02:29:42.281469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 02:29:42.281481 | orchestrator | 2026-04-11 02:29:42.281492 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-11 02:29:42.281501 | orchestrator | Saturday 11 April 2026 02:29:41 +0000 (0:00:03.617) 0:01:05.920 ******** 2026-04-11 02:29:42.281511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-11 02:29:42.281523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 02:29:42.281590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 02:29:42.281607 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:42.281630 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-11 02:29:42.281644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 02:29:42.281657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 02:29:42.281665 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:42.281673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-11 02:29:42.281697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 02:29:52.498423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 02:29:52.499004 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:52.499067 | orchestrator | 2026-04-11 02:29:52.499075 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-11 02:29:52.499082 | orchestrator | Saturday 11 April 2026 02:29:42 +0000 (0:00:00.627) 0:01:06.548 ******** 2026-04-11 02:29:52.499101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-11 02:29:52.499109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-11 02:29:52.499117 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:52.499123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-11 02:29:52.499128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-11 02:29:52.499134 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:52.499140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-11 02:29:52.499145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-11 02:29:52.499150 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:52.499156 | orchestrator | 2026-04-11 02:29:52.499163 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-11 02:29:52.499171 | orchestrator | Saturday 11 April 2026 02:29:43 +0000 (0:00:00.899) 0:01:07.447 ******** 2026-04-11 02:29:52.499179 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:29:52.499188 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:29:52.499196 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:29:52.499205 | orchestrator | 2026-04-11 02:29:52.499214 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-11 02:29:52.499222 | orchestrator | Saturday 11 April 2026 02:29:44 +0000 (0:00:01.589) 0:01:09.037 ******** 2026-04-11 02:29:52.499249 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:29:52.499256 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:29:52.499261 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:29:52.499267 | orchestrator | 2026-04-11 02:29:52.499273 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-11 02:29:52.499278 | orchestrator | 
Saturday 11 April 2026 02:29:46 +0000 (0:00:02.082) 0:01:11.120 ******** 2026-04-11 02:29:52.499283 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:52.499287 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:52.499292 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:29:52.499297 | orchestrator | 2026-04-11 02:29:52.499301 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-11 02:29:52.499306 | orchestrator | Saturday 11 April 2026 02:29:47 +0000 (0:00:00.349) 0:01:11.469 ******** 2026-04-11 02:29:52.499310 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:29:52.499315 | orchestrator | 2026-04-11 02:29:52.499320 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-11 02:29:52.499324 | orchestrator | Saturday 11 April 2026 02:29:47 +0000 (0:00:00.750) 0:01:12.220 ******** 2026-04-11 02:29:52.499347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-11 02:29:52.499357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-11 02:29:52.499363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-11 02:29:52.499367 | orchestrator | 2026-04-11 02:29:52.499372 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-11 02:29:52.499378 | orchestrator | Saturday 11 April 2026 02:29:50 +0000 (0:00:03.038) 0:01:15.258 ******** 2026-04-11 02:29:52.499388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-11 02:29:52.499393 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:29:52.499398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-11 02:29:52.499402 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:29:52.499411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-11 02:30:00.684585 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:00.684694 | orchestrator | 2026-04-11 02:30:00.684711 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-11 02:30:00.684725 | orchestrator | Saturday 11 April 2026 02:29:52 +0000 (0:00:01.504) 0:01:16.763 ******** 2026-04-11 02:30:00.684756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-11 02:30:00.684842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-11 02:30:00.684858 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:30:00.684895 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-11 02:30:00.684908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-11 02:30:00.684920 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:30:00.684932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-11 02:30:00.684943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-11 02:30:00.684961 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:00.684980 | orchestrator | 2026-04-11 02:30:00.684998 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-11 02:30:00.685017 | orchestrator | Saturday 11 April 2026 02:29:54 +0000 (0:00:01.819) 0:01:18.583 ******** 2026-04-11 02:30:00.685034 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:30:00.685054 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:30:00.685072 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:00.685088 | orchestrator | 2026-04-11 02:30:00.685111 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-11 02:30:00.685131 | orchestrator | Saturday 11 April 2026 02:29:54 +0000 (0:00:00.458) 0:01:19.041 ******** 2026-04-11 02:30:00.685149 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:30:00.685169 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:30:00.685189 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:00.685209 | orchestrator | 2026-04-11 02:30:00.685228 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-11 02:30:00.685246 | orchestrator | Saturday 11 April 2026 02:29:56 +0000 (0:00:01.423) 0:01:20.465 ******** 2026-04-11 02:30:00.685260 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:30:00.685273 | orchestrator | 2026-04-11 02:30:00.685286 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-11 02:30:00.685299 | orchestrator | Saturday 11 April 2026 02:29:57 +0000 (0:00:00.977) 0:01:21.442 ******** 2026-04-11 02:30:00.685346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 02:30:00.685378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:30:00.685394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 
02:30:00.685408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 02:30:00.685422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 02:30:00.685443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:30:01.452741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 02:30:01.452944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 02:30:01.452973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 02:30:01.452994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:30:01.453016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 02:30:01.453077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 02:30:01.453114 | orchestrator | 2026-04-11 02:30:01.453135 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-11 02:30:01.453148 | orchestrator | Saturday 11 April 2026 02:30:00 +0000 (0:00:03.591) 0:01:25.033 ******** 2026-04-11 02:30:01.453161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-11 02:30:01.453174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 02:30:01.453186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-11 02:30:01.453198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-11 02:30:01.453210 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:30:01.453233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-11 02:30:08.026433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 02:30:08.026560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-11 02:30:08.026578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-11 02:30:08.026591 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:30:08.026606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-11 02:30:08.026619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 02:30:08.026676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-11 02:30:08.026691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-11 02:30:08.026703 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:30:08.026714 | orchestrator |
2026-04-11 02:30:08.026727 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-04-11 02:30:08.026739 | orchestrator | Saturday 11 April 2026 02:30:01 +0000 (0:00:00.813) 0:01:25.847 ********
2026-04-11 02:30:08.026751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-11 02:30:08.026887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-11 02:30:08.026904 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:30:08.026916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-11 02:30:08.026927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-11 02:30:08.026942 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:30:08.026955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-11 02:30:08.026968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-11 02:30:08.026981 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:30:08.026994 | orchestrator |
2026-04-11 02:30:08.027008 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-04-11 02:30:08.027021 | orchestrator | Saturday 11 April 2026 02:30:02 +0000 (0:00:01.326) 0:01:27.174 ********
2026-04-11 02:30:08.027034 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:30:08.027057 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:30:08.027070 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:30:08.027083 | orchestrator |
2026-04-11 02:30:08.027096 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-04-11 02:30:08.027109 | orchestrator | Saturday 11 April 2026 02:30:04 +0000 (0:00:01.319) 0:01:28.494 ********
2026-04-11 02:30:08.027122 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:30:08.027136 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:30:08.027150 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:30:08.027162 | orchestrator |
2026-04-11 02:30:08.027176 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-04-11 02:30:08.027204 | orchestrator | Saturday 11 April 2026 02:30:06 +0000 (0:00:02.038) 0:01:30.533 ********
2026-04-11 02:30:08.027217 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:30:08.027230 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:30:08.027254 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:30:08.027267 | orchestrator |
2026-04-11 02:30:08.027281 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-04-11 02:30:08.027293 | orchestrator | Saturday 11 April 2026 02:30:06 +0000 (0:00:00.336) 0:01:30.870 ********
2026-04-11 02:30:08.027306 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:30:08.027319 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:30:08.027331 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:30:08.027342 | orchestrator |
2026-04-11 02:30:08.027353 | orchestrator | TASK [include_role : designate] ************************************************
2026-04-11 02:30:08.027364 | orchestrator | Saturday 11 April 2026 02:30:06 +0000 (0:00:00.366) 0:01:31.236 ********
2026-04-11 02:30:08.027375 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:30:08.027385 | orchestrator |
2026-04-11 02:30:08.027396 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-04-11 02:30:08.027407 | orchestrator | Saturday 11 April 2026 02:30:08 +0000 (0:00:01.059) 0:01:32.296 ********
2026-04-11 02:30:11.471545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 02:30:11.471620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 02:30:11.471631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 02:30:11.471653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 02:30:11.471660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 02:30:11.471681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 02:30:11.471687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 02:30:11.471692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 02:30:11.471698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 02:30:11.471707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 02:30:11.471713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-11 02:30:11.471718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 02:30:11.471732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.415047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.415163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 02:30:12.415217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 02:30:12.415234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.415249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.415278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.415318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.415333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.415357 | orchestrator |
2026-04-11 02:30:12.415372 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-04-11 02:30:12.415384 | orchestrator | Saturday 11 April 2026 02:30:11 +0000 (0:00:03.730) 0:01:36.026 ********
2026-04-11 02:30:12.415397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 02:30:12.415409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 02:30:12.415421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.415433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.415455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.904276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.904423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.904450 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:30:12.904473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 02:30:12.904494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 02:30:12.905185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.905235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.905281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.905315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.905339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.905358 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:30:12.905377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 02:30:12.905395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 02:30:12.905411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 02:30:12.905450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 02:30:23.321230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 02:30:23.321357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130',
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 02:30:23.321377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-11 02:30:23.321392 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:23.321402 | orchestrator | 2026-04-11 02:30:23.321410 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-11 02:30:23.321418 | orchestrator | Saturday 11 April 2026 02:30:12 +0000 (0:00:01.146) 0:01:37.173 ******** 2026-04-11 02:30:23.321426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-11 02:30:23.321435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-11 02:30:23.321442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}})  2026-04-11 02:30:23.321450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-11 02:30:23.321457 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:30:23.321464 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:30:23.321471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-11 02:30:23.321497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-11 02:30:23.321504 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:23.321511 | orchestrator | 2026-04-11 02:30:23.321518 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-11 02:30:23.321525 | orchestrator | Saturday 11 April 2026 02:30:14 +0000 (0:00:01.384) 0:01:38.557 ******** 2026-04-11 02:30:23.321532 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:30:23.321539 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:30:23.321545 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:30:23.321552 | orchestrator | 2026-04-11 02:30:23.321559 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-11 02:30:23.321565 | orchestrator | Saturday 11 April 2026 02:30:15 +0000 (0:00:01.314) 0:01:39.872 ******** 2026-04-11 02:30:23.321572 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:30:23.321579 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:30:23.321586 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:30:23.321598 | 
orchestrator | 2026-04-11 02:30:23.321609 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-11 02:30:23.321620 | orchestrator | Saturday 11 April 2026 02:30:17 +0000 (0:00:02.072) 0:01:41.945 ******** 2026-04-11 02:30:23.321653 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:30:23.321668 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:30:23.321678 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:23.321687 | orchestrator | 2026-04-11 02:30:23.321697 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-11 02:30:23.321708 | orchestrator | Saturday 11 April 2026 02:30:18 +0000 (0:00:00.342) 0:01:42.288 ******** 2026-04-11 02:30:23.321718 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:30:23.321728 | orchestrator | 2026-04-11 02:30:23.321738 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-11 02:30:23.321749 | orchestrator | Saturday 11 April 2026 02:30:19 +0000 (0:00:01.139) 0:01:43.427 ******** 2026-04-11 02:30:23.321796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 02:30:23.321815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-11 02:30:23.321855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 02:30:26.970532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-11 02:30:26.970724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 02:30:26.970942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-11 02:30:26.970991 | orchestrator | 2026-04-11 02:30:26.971012 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-11 02:30:26.971034 | orchestrator | Saturday 11 April 2026 02:30:23 +0000 (0:00:04.285) 0:01:47.713 ******** 2026-04-11 02:30:26.971066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 02:30:26.971105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-11 02:30:30.965899 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:30:30.966923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 
02:30:30.967001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-11 02:30:30.967041 | orchestrator | skipping: [testbed-node-1] 
2026-04-11 02:30:30.967100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 02:30:30.967135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-11 02:30:30.967180 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:30.967203 | orchestrator | 2026-04-11 02:30:30.967222 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-11 02:30:30.967242 | orchestrator | 
Saturday 11 April 2026 02:30:27 +0000 (0:00:03.641) 0:01:51.354 ******** 2026-04-11 02:30:30.967263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-11 02:30:30.967295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-11 02:30:39.762683 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:30:39.762824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-11 02:30:39.762843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-11 02:30:39.762856 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:30:39.762867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-11 02:30:39.762892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-11 02:30:39.762903 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:39.762913 | orchestrator | 2026-04-11 02:30:39.762924 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-11 02:30:39.762935 | orchestrator | Saturday 11 April 2026 02:30:30 +0000 (0:00:03.880) 0:01:55.235 ******** 2026-04-11 02:30:39.762966 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:30:39.762977 | orchestrator 
| changed: [testbed-node-1] 2026-04-11 02:30:39.762986 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:30:39.762996 | orchestrator | 2026-04-11 02:30:39.763005 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-11 02:30:39.763015 | orchestrator | Saturday 11 April 2026 02:30:32 +0000 (0:00:01.356) 0:01:56.592 ******** 2026-04-11 02:30:39.763024 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:30:39.763034 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:30:39.763044 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:30:39.763053 | orchestrator | 2026-04-11 02:30:39.763063 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-11 02:30:39.763072 | orchestrator | Saturday 11 April 2026 02:30:34 +0000 (0:00:02.197) 0:01:58.790 ******** 2026-04-11 02:30:39.763081 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:30:39.763091 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:30:39.763100 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:39.763110 | orchestrator | 2026-04-11 02:30:39.763119 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-11 02:30:39.763129 | orchestrator | Saturday 11 April 2026 02:30:34 +0000 (0:00:00.340) 0:01:59.130 ******** 2026-04-11 02:30:39.763138 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:30:39.763148 | orchestrator | 2026-04-11 02:30:39.763161 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-11 02:30:39.763179 | orchestrator | Saturday 11 April 2026 02:30:36 +0000 (0:00:01.173) 0:02:00.303 ******** 2026-04-11 02:30:39.763217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-11 02:30:39.763240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-11 02:30:39.763259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-11 02:30:39.763278 | 
orchestrator | 2026-04-11 02:30:39.763297 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-11 02:30:39.763325 | orchestrator | Saturday 11 April 2026 02:30:39 +0000 (0:00:03.061) 0:02:03.365 ******** 2026-04-11 02:30:39.763337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-11 02:30:39.763350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-11 02:30:39.763361 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:30:39.763372 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:30:39.763384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-11 02:30:39.763466 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:39.763486 | orchestrator | 2026-04-11 02:30:39.763503 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-11 02:30:39.763517 | orchestrator | Saturday 11 April 2026 02:30:39 +0000 (0:00:00.445) 0:02:03.811 ******** 2026-04-11 02:30:39.763544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-11 02:30:39.763577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-11 02:30:49.006976 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:30:49.007119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-11 02:30:49.007154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-11 02:30:49.007178 | orchestrator | skipping: 
[testbed-node-1] 2026-04-11 02:30:49.007202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-11 02:30:49.007223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-11 02:30:49.007277 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:49.007300 | orchestrator | 2026-04-11 02:30:49.007321 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-11 02:30:49.007343 | orchestrator | Saturday 11 April 2026 02:30:40 +0000 (0:00:00.940) 0:02:04.752 ******** 2026-04-11 02:30:49.007364 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:30:49.007384 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:30:49.007402 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:30:49.007422 | orchestrator | 2026-04-11 02:30:49.007443 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-11 02:30:49.007463 | orchestrator | Saturday 11 April 2026 02:30:41 +0000 (0:00:01.382) 0:02:06.134 ******** 2026-04-11 02:30:49.007482 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:30:49.007503 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:30:49.007523 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:30:49.007541 | orchestrator | 2026-04-11 02:30:49.007561 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-11 02:30:49.007600 | orchestrator | Saturday 11 April 2026 02:30:43 +0000 (0:00:02.133) 0:02:08.267 ******** 2026-04-11 02:30:49.007621 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:30:49.007640 | orchestrator | skipping: [testbed-node-1] 2026-04-11 
02:30:49.007660 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:49.007679 | orchestrator | 2026-04-11 02:30:49.007700 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-11 02:30:49.007720 | orchestrator | Saturday 11 April 2026 02:30:44 +0000 (0:00:00.354) 0:02:08.622 ******** 2026-04-11 02:30:49.007739 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:30:49.007786 | orchestrator | 2026-04-11 02:30:49.007808 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-11 02:30:49.007827 | orchestrator | Saturday 11 April 2026 02:30:45 +0000 (0:00:01.224) 0:02:09.847 ******** 2026-04-11 02:30:49.007887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 02:30:49.007940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 02:30:49.007978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 02:30:50.720689 | orchestrator | 2026-04-11 02:30:50.720938 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-11 02:30:50.720970 | orchestrator | Saturday 11 April 2026 02:30:48 +0000 (0:00:03.426) 0:02:13.273 ******** 2026-04-11 02:30:50.721022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 02:30:50.721049 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:30:50.721099 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 02:30:50.721157 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:30:50.721189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 02:30:50.721210 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:30:50.721232 | orchestrator | 2026-04-11 02:30:50.721252 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-11 02:30:50.721272 | orchestrator | Saturday 11 April 2026 02:30:49 +0000 (0:00:00.679) 0:02:13.953 ******** 2026-04-11 02:30:50.721292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-11 02:30:50.721327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 02:30:50.721351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-11 02:30:50.721385 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 02:30:59.894674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-11 02:30:59.894879 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:30:59.894914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-11 02:30:59.894939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 02:30:59.894985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-11 02:30:59.895010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-11 02:30:59.895028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-11 02:30:59.895039 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:30:59.895050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-11 02:30:59.895062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-11 02:30:59.895073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-11 02:30:59.895109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-11 02:30:59.895120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-11 02:30:59.895131 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:30:59.895142 | orchestrator |
2026-04-11 02:30:59.895154 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-04-11 02:30:59.895167 | orchestrator | Saturday 11 April 2026 02:30:50 +0000 (0:00:01.036) 0:02:14.989 ********
2026-04-11 02:30:59.895178 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:30:59.895188 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:30:59.895199 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:30:59.895210 | orchestrator |
2026-04-11 02:30:59.895221 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-04-11 02:30:59.895231 | orchestrator | Saturday 11 April 2026 02:30:52 +0000 (0:00:01.719) 0:02:16.709 ********
2026-04-11 02:30:59.895243 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:30:59.895254 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:30:59.895265 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:30:59.895275 | orchestrator |
2026-04-11 02:30:59.895286 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-04-11 02:30:59.895297 | orchestrator | Saturday 11 April 2026 02:30:54 +0000 (0:00:02.173) 0:02:18.882 ********
2026-04-11 02:30:59.895308 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:30:59.895319 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:30:59.895350 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:30:59.895362 | orchestrator |
2026-04-11 02:30:59.895373 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-04-11 02:30:59.895383 | orchestrator | Saturday 11 April 2026 02:30:54 +0000 (0:00:00.345) 0:02:19.227 ********
2026-04-11 02:30:59.895394 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:30:59.895405 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:30:59.895416 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:30:59.895426 | orchestrator |
2026-04-11 02:30:59.895437 | orchestrator | TASK [include_role : keystone] *************************************************
2026-04-11 02:30:59.895448 | orchestrator | Saturday 11 April 2026 02:30:55 +0000 (0:00:00.346) 0:02:19.574 ********
2026-04-11 02:30:59.895459 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:30:59.895469 | orchestrator |
2026-04-11 02:30:59.895480 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-04-11 02:30:59.895491 | orchestrator | Saturday 11 April 2026 02:30:56 +0000 (0:00:01.215) 0:02:20.789 ********
2026-04-11 02:30:59.895514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-11 02:30:59.895540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-11 02:30:59.895553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 02:30:59.895566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-11 02:30:59.895587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-11 02:31:00.533054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 02:31:00.533173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-11 02:31:00.533231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-11 02:31:00.533252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 02:31:00.533269 | orchestrator |
2026-04-11 02:31:00.533287 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-04-11 02:31:00.533304 | orchestrator | Saturday 11 April 2026 02:30:59 +0000 (0:00:03.370) 0:02:24.160 ********
2026-04-11 02:31:00.533346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-11 02:31:00.533375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-11 02:31:00.533393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 02:31:00.533421 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:31:00.533441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-11 02:31:00.533458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-11 02:31:00.533475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 02:31:00.533491 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:31:00.533561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-11 02:31:10.312049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-11 02:31:10.312132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 02:31:10.312142 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:31:10.312150 | orchestrator |
2026-04-11 02:31:10.312156 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-04-11 02:31:10.312163 | orchestrator | Saturday 11 April 2026 02:31:00 +0000 (0:00:00.633) 0:02:24.793 ********
2026-04-11 02:31:10.312170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-11 02:31:10.312179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-11 02:31:10.312186 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:31:10.312192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-11 02:31:10.312198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-11 02:31:10.312204 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:31:10.312209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-11 02:31:10.312215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-11 02:31:10.312220 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:31:10.312226 | orchestrator |
2026-04-11 02:31:10.312231 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-04-11 02:31:10.312237 | orchestrator | Saturday 11 April 2026 02:31:01 +0000 (0:00:01.244) 0:02:26.037 ********
2026-04-11 02:31:10.312243 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:31:10.312248 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:31:10.312291 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:31:10.312297 | orchestrator |
2026-04-11 02:31:10.312303 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-04-11 02:31:10.312308 | orchestrator | Saturday 11 April 2026 02:31:03 +0000 (0:00:01.339) 0:02:27.377 ********
2026-04-11 02:31:10.312313 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:31:10.312319 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:31:10.312324 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:31:10.312329 | orchestrator |
2026-04-11 02:31:10.312335 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-04-11 02:31:10.312340 | orchestrator | Saturday 11 April 2026 02:31:05 +0000 (0:00:00.347) 0:02:29.519 ********
2026-04-11 02:31:10.312345 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:31:10.312362 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:31:10.312368 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:31:10.312374 | orchestrator |
2026-04-11 02:31:10.312379 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-04-11 02:31:10.312396 | orchestrator | Saturday 11 April 2026 02:31:05 +0000 (0:00:00.347) 0:02:29.867 ********
2026-04-11 02:31:10.312402 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:31:10.312407 | orchestrator |
2026-04-11 02:31:10.312412 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-04-11 02:31:10.312418 | orchestrator | Saturday 11 April 2026 02:31:06 +0000 (0:00:01.287) 0:02:31.154 ********
2026-04-11 02:31:10.312425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 02:31:10.312434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 02:31:10.312440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 02:31:10.312452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 02:31:10.312464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 02:31:15.709524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 02:31:15.709633 | orchestrator |
2026-04-11 02:31:15.709652 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-04-11 02:31:15.709666 | orchestrator | Saturday 11 April 2026 02:31:10 +0000 (0:00:03.424) 0:02:34.579 ********
2026-04-11 02:31:15.709680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 02:31:15.709824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 02:31:15.709868 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:31:15.709904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 02:31:15.709950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 02:31:15.709963 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:31:15.709975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 02:31:15.709987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 02:31:15.710006 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:31:15.710121 | orchestrator |
2026-04-11 02:31:15.710146 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-04-11 02:31:15.710165 | orchestrator | Saturday 11 April 2026 02:31:10 +0000 (0:00:00.680) 0:02:35.260 ********
2026-04-11 02:31:15.710185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-04-11 02:31:15.710206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-04-11 02:31:15.710226 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:31:15.710243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-04-11 02:31:15.710262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-04-11 02:31:15.710280 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:31:15.710298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-04-11 02:31:15.710315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-04-11 02:31:15.710333 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:31:15.710351 | orchestrator |
2026-04-11 02:31:15.710378 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-04-11 02:31:15.710396 | orchestrator | Saturday 11 April 2026 02:31:11 +0000 (0:00:00.926) 0:02:36.186 ********
2026-04-11 02:31:15.710414 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:31:15.710434 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:31:15.710454 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:31:15.710472 | orchestrator |
2026-04-11 02:31:15.710490 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-04-11 02:31:15.710510 | orchestrator | Saturday 11 April 2026 02:31:13 +0000 (0:00:01.678) 0:02:37.864 ********
2026-04-11 02:31:15.710530 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:31:15.710548 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:31:15.710568 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:31:15.710587 | orchestrator |
2026-04-11 02:31:15.710607 | orchestrator | TASK [include_role : manila] ***************************************************
2026-04-11 02:31:15.710635 | orchestrator | Saturday 11 April 2026 02:31:15 +0000 (0:00:02.109) 0:02:39.974 ********
2026-04-11 02:31:20.647986 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:31:20.648090 | orchestrator |
2026-04-11 02:31:20.648107 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-04-11 02:31:20.648118 | orchestrator | Saturday 11 April 2026 02:31:16 +0000 (0:00:01.144) 0:02:41.119 ********
2026-04-11 02:31:20.648134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-11 02:31:20.648174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-11 02:31:20.648188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 02:31:20.648200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 02:31:20.648228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:31:20.648259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 02:31:20.648271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 02:31:20.648290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 02:31:20.648301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 02:31:20.648312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:31:20.648329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 02:31:20.648349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 02:31:21.644487 | orchestrator | 2026-04-11 02:31:21.644559 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-11 02:31:21.644567 | orchestrator | Saturday 11 April 2026 02:31:20 +0000 (0:00:03.869) 0:02:44.988 ******** 2026-04-11 02:31:21.644588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-11 02:31:21.644595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:31:21.644601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 
02:31:21.644607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 02:31:21.644611 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:31:21.644625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-11 02:31:21.644640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:31:21.644648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 02:31:21.644653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 02:31:21.644656 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:31:21.644660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-11 02:31:21.644664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:31:21.644671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 02:31:21.644679 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 02:31:33.734087 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:31:33.734218 | orchestrator | 2026-04-11 02:31:33.734244 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-11 02:31:33.734263 | orchestrator | Saturday 11 April 2026 02:31:21 +0000 (0:00:01.019) 0:02:46.008 ******** 2026-04-11 02:31:33.734275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-11 02:31:33.734287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-11 02:31:33.734299 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:31:33.734310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-11 02:31:33.734320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-11 
02:31:33.734330 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:31:33.734340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-11 02:31:33.734350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-11 02:31:33.734360 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:31:33.734370 | orchestrator | 2026-04-11 02:31:33.734379 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-11 02:31:33.734389 | orchestrator | Saturday 11 April 2026 02:31:22 +0000 (0:00:01.101) 0:02:47.109 ******** 2026-04-11 02:31:33.734399 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:31:33.734409 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:31:33.734419 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:31:33.734429 | orchestrator | 2026-04-11 02:31:33.734438 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-11 02:31:33.734448 | orchestrator | Saturday 11 April 2026 02:31:24 +0000 (0:00:01.335) 0:02:48.445 ******** 2026-04-11 02:31:33.734458 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:31:33.734467 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:31:33.734477 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:31:33.734487 | orchestrator | 2026-04-11 02:31:33.734497 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-11 02:31:33.734506 | orchestrator | Saturday 11 April 2026 02:31:26 +0000 (0:00:02.185) 0:02:50.630 ******** 2026-04-11 02:31:33.734516 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 
02:31:33.734526 | orchestrator | 2026-04-11 02:31:33.734536 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-11 02:31:33.734545 | orchestrator | Saturday 11 April 2026 02:31:27 +0000 (0:00:01.464) 0:02:52.095 ******** 2026-04-11 02:31:33.734556 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 02:31:33.734568 | orchestrator | 2026-04-11 02:31:33.734579 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-11 02:31:33.734615 | orchestrator | Saturday 11 April 2026 02:31:30 +0000 (0:00:03.067) 0:02:55.163 ******** 2026-04-11 02:31:33.734668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 02:31:33.734686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-11 02:31:33.734699 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:31:33.734716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 02:31:33.734774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-11 02:31:33.734794 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:31:33.734817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 02:31:36.324186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-11 02:31:36.324285 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:31:36.324299 | orchestrator | 2026-04-11 02:31:36.324308 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-11 02:31:36.324317 | orchestrator | Saturday 11 April 2026 02:31:33 +0000 (0:00:02.833) 0:02:57.996 ******** 2026-04-11 02:31:36.324366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 02:31:36.324376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-11 02:31:36.324384 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:31:36.324409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 02:31:36.324433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-11 02:31:36.324441 | orchestrator | skipping: 
[testbed-node-1] 2026-04-11 02:31:36.324448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 02:31:36.324462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 
'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-11 02:31:46.787179 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:31:46.787253 | orchestrator | 2026-04-11 02:31:46.787261 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-11 02:31:46.787267 | orchestrator | Saturday 11 April 2026 02:31:36 +0000 (0:00:02.589) 0:03:00.585 ******** 2026-04-11 02:31:46.787273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-11 02:31:46.787297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-11 02:31:46.787313 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:31:46.787317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-11 02:31:46.787322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-11 02:31:46.787326 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:31:46.787331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-11 02:31:46.787335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-11 02:31:46.787339 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:31:46.787344 | orchestrator | 2026-04-11 02:31:46.787348 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-11 02:31:46.787352 | orchestrator | Saturday 11 April 2026 02:31:39 +0000 (0:00:03.188) 0:03:03.774 ******** 2026-04-11 02:31:46.787357 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:31:46.787376 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:31:46.787381 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:31:46.787385 | orchestrator | 2026-04-11 02:31:46.787389 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-11 02:31:46.787394 | orchestrator | Saturday 11 April 2026 02:31:41 +0000 (0:00:02.157) 0:03:05.932 ******** 2026-04-11 02:31:46.787398 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:31:46.787402 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:31:46.787406 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:31:46.787410 | orchestrator | 2026-04-11 02:31:46.787415 | orchestrator | TASK [include_role : 
masakari] ************************************************* 2026-04-11 02:31:46.787419 | orchestrator | Saturday 11 April 2026 02:31:43 +0000 (0:00:01.563) 0:03:07.495 ******** 2026-04-11 02:31:46.787423 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:31:46.787427 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:31:46.787431 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:31:46.787435 | orchestrator | 2026-04-11 02:31:46.787440 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-11 02:31:46.787444 | orchestrator | Saturday 11 April 2026 02:31:43 +0000 (0:00:00.341) 0:03:07.837 ******** 2026-04-11 02:31:46.787448 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:31:46.787452 | orchestrator | 2026-04-11 02:31:46.787456 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-11 02:31:46.787461 | orchestrator | Saturday 11 April 2026 02:31:45 +0000 (0:00:01.491) 0:03:09.329 ******** 2026-04-11 02:31:46.787469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-11 02:31:46.787477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-11 02:31:46.787481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-11 02:31:46.787486 | orchestrator | 2026-04-11 02:31:46.787490 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-11 02:31:46.787499 | orchestrator | Saturday 11 April 2026 02:31:46 +0000 (0:00:01.526) 0:03:10.855 ******** 2026-04-11 02:31:46.787506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 
'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-11 02:31:55.814843 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:31:55.814955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-11 02:31:55.814974 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:31:55.814982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-11 02:31:55.814989 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:31:55.814995 | orchestrator | 2026-04-11 02:31:55.815002 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-11 02:31:55.815010 | orchestrator | Saturday 11 April 2026 02:31:46 +0000 (0:00:00.405) 0:03:11.261 ******** 2026-04-11 02:31:55.815018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-11 02:31:55.815030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-11 02:31:55.815041 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:31:55.815051 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:31:55.815061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-11 02:31:55.815091 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:31:55.815098 | orchestrator | 2026-04-11 02:31:55.815167 | orchestrator | TASK [proxysql-config : Copying over 
memcached ProxySQL users config] ********** 2026-04-11 02:31:55.815184 | orchestrator | Saturday 11 April 2026 02:31:47 +0000 (0:00:00.921) 0:03:12.183 ******** 2026-04-11 02:31:55.815194 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:31:55.815204 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:31:55.815214 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:31:55.815221 | orchestrator | 2026-04-11 02:31:55.815226 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-11 02:31:55.815232 | orchestrator | Saturday 11 April 2026 02:31:48 +0000 (0:00:00.499) 0:03:12.683 ******** 2026-04-11 02:31:55.815238 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:31:55.815244 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:31:55.815249 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:31:55.815255 | orchestrator | 2026-04-11 02:31:55.815261 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-11 02:31:55.815267 | orchestrator | Saturday 11 April 2026 02:31:49 +0000 (0:00:01.371) 0:03:14.054 ******** 2026-04-11 02:31:55.815273 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:31:55.815278 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:31:55.815287 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:31:55.815296 | orchestrator | 2026-04-11 02:31:55.815307 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-11 02:31:55.815314 | orchestrator | Saturday 11 April 2026 02:31:50 +0000 (0:00:00.327) 0:03:14.382 ******** 2026-04-11 02:31:55.815320 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:31:55.815326 | orchestrator | 2026-04-11 02:31:55.815331 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-11 02:31:55.815337 | orchestrator | Saturday 11 
April 2026 02:31:51 +0000 (0:00:01.784) 0:03:16.166 ******** 2026-04-11 02:31:55.815358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-11 02:31:55.815372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-11 02:31:55.815379 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:55.815392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:55.815399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:55.815410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.028837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.029036 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.029096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-11 02:31:56.029120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-11 02:31:56.029139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.029188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.029223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:56.029247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:56.029316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.029339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 02:31:56.029358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.029376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-11 02:31:56.029408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:56.182737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.182895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:56.182918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 02:31:56.182929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:56.182937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 02:31:56.182944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.182973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 02:31:56.182990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 
02:31:56.182998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-11 02:31:56.183006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-11 02:31:56.183014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.183021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:56.183042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.471063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': 
False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.471138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 02:31:56.471148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.471155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 02:31:56.471172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-11 02:31:56.471209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.471216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:56.471222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:56.471227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.471233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 02:31:56.471241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:56.471250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': 
True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-11 02:31:56.471259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:57.653405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.653547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-11 02:31:57.653580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-11 02:31:57.653603 | orchestrator |
2026-04-11 02:31:57.653627 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-04-11 02:31:57.653683 | orchestrator | Saturday 11 April 2026 02:31:56 +0000 (0:00:04.573) 0:03:20.740 ********
2026-04-11 02:31:57.653714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 02:31:57.653829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.653848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.653860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.653872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-11 02:31:57.653899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 02:31:57.653912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.653934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.745158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:57.745260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.745278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:57.745331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.745344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.745357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 02:31:57.745387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-11 02:31:57.745401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.745453 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.745470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-11 02:31:57.745482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:57.745495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:57.745507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:57.745526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.864683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.864846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 02:31:57.864879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 02:31:57.864889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 02:31:57.864899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 02:31:57.864908 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:31:57.864934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.864948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.864956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.864964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-11 02:31:57.864972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:57.864980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:57.865055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:58.145118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-11 02:31:58.145229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 02:31:58.145249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:58.145262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:58.145275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 
'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 02:31:58.145305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:58.145316 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:31:58.145345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:58.145362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 02:31:58.145373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:58.145383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-11 02:31:58.145394 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-11 02:31:58.145404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 02:31:58.145429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 02:32:09.426126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 02:32:09.426239 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:32:09.426256 | orchestrator | 2026-04-11 02:32:09.426269 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-11 02:32:09.426282 | orchestrator | Saturday 11 April 2026 02:31:58 +0000 (0:00:01.672) 0:03:22.412 ******** 2026-04-11 02:32:09.426294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-11 02:32:09.426308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-11 02:32:09.426320 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:32:09.426331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}})  2026-04-11 02:32:09.426343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-11 02:32:09.426353 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:32:09.426365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-11 02:32:09.426376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-11 02:32:09.426410 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:32:09.426422 | orchestrator | 2026-04-11 02:32:09.426433 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-11 02:32:09.426444 | orchestrator | Saturday 11 April 2026 02:32:00 +0000 (0:00:02.316) 0:03:24.729 ******** 2026-04-11 02:32:09.426455 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:32:09.426466 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:32:09.426478 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:32:09.426489 | orchestrator | 2026-04-11 02:32:09.426500 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-11 02:32:09.426511 | orchestrator | Saturday 11 April 2026 02:32:01 +0000 (0:00:01.381) 0:03:26.111 ******** 2026-04-11 02:32:09.426522 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:32:09.426533 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:32:09.426544 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:32:09.426555 | orchestrator | 2026-04-11 02:32:09.426565 | orchestrator | TASK [include_role : 
placement] ************************************************ 2026-04-11 02:32:09.426579 | orchestrator | Saturday 11 April 2026 02:32:03 +0000 (0:00:02.169) 0:03:28.280 ******** 2026-04-11 02:32:09.426592 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:32:09.426605 | orchestrator | 2026-04-11 02:32:09.426617 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-11 02:32:09.426630 | orchestrator | Saturday 11 April 2026 02:32:05 +0000 (0:00:01.335) 0:03:29.615 ******** 2026-04-11 02:32:09.426646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-11 02:32:09.426686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-11 02:32:09.426701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-11 02:32:09.426723 | orchestrator | 2026-04-11 02:32:09.426737 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-11 02:32:09.426778 | orchestrator | Saturday 11 April 2026 02:32:08 +0000 (0:00:03.510) 0:03:33.125 ******** 2026-04-11 02:32:09.426792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': 
True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-11 02:32:09.426807 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:32:09.426820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-11 02:32:09.426833 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:32:09.426861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-11 02:32:20.345867 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:32:20.345952 | orchestrator | 2026-04-11 02:32:20.345961 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-11 02:32:20.345970 | orchestrator | Saturday 11 April 2026 02:32:09 +0000 (0:00:00.568) 0:03:33.693 ******** 2026-04-11 02:32:20.345976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-11 02:32:20.346009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-11 02:32:20.346068 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:32:20.346078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-11 02:32:20.346088 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-11 02:32:20.346097 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:32:20.346105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-11 02:32:20.346110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-11 02:32:20.346115 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:32:20.346121 | orchestrator | 2026-04-11 02:32:20.346128 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-11 02:32:20.346137 | orchestrator | Saturday 11 April 2026 02:32:10 +0000 (0:00:00.825) 0:03:34.519 ******** 2026-04-11 02:32:20.346146 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:32:20.346154 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:32:20.346161 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:32:20.346169 | orchestrator | 2026-04-11 02:32:20.346177 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-11 02:32:20.346184 | orchestrator | Saturday 11 April 2026 02:32:12 +0000 (0:00:02.026) 0:03:36.545 ******** 2026-04-11 02:32:20.346191 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:32:20.346199 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:32:20.346206 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:32:20.346213 | orchestrator | 2026-04-11 02:32:20.346221 | orchestrator | TASK [include_role : nova] 
***************************************************** 2026-04-11 02:32:20.346230 | orchestrator | Saturday 11 April 2026 02:32:14 +0000 (0:00:01.893) 0:03:38.439 ******** 2026-04-11 02:32:20.346239 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:32:20.346247 | orchestrator | 2026-04-11 02:32:20.346254 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-11 02:32:20.346262 | orchestrator | Saturday 11 April 2026 02:32:15 +0000 (0:00:01.674) 0:03:40.113 ******** 2026-04-11 02:32:20.346276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 02:32:20.346327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:32:20.346339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 02:32:20.346349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 02:32:20.346358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 02:32:20.346378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:32:20.346394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 02:32:21.372879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:32:21.372965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 02:32:21.372979 | orchestrator | 2026-04-11 02:32:21.372989 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-11 02:32:21.372999 | orchestrator | Saturday 11 April 2026 02:32:20 +0000 (0:00:04.498) 0:03:44.612 ******** 2026-04-11 02:32:21.373012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-11 02:32:21.373042 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:32:21.373064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 02:32:21.373073 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:32:21.373100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-11 02:32:21.373111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:32:21.373130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 02:32:21.373139 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:32:21.373152 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-11 02:32:21.373172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 02:32:34.790288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 02:32:34.790381 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:32:34.790388 | orchestrator | 2026-04-11 02:32:34.790393 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-11 02:32:34.790398 | orchestrator | Saturday 11 April 2026 02:32:21 +0000 (0:00:01.022) 0:03:45.634 ******** 2026-04-11 02:32:34.790404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-11 02:32:34.790411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-11 02:32:34.790416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-11 02:32:34.790421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-11 02:32:34.790427 | orchestrator | skipping: [testbed-node-0] 2026-04-11 
02:32:34.790431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-11 02:32:34.790435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-11 02:32:34.790453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-11 02:32:34.790457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-11 02:32:34.790461 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:32:34.790465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-11 02:32:34.790469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-11 02:32:34.790482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-11 02:32:34.790486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-11 02:32:34.790490 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:32:34.790496 | orchestrator | 2026-04-11 02:32:34.790502 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-11 02:32:34.790508 | orchestrator | Saturday 11 April 2026 02:32:22 +0000 (0:00:01.322) 0:03:46.956 ******** 2026-04-11 02:32:34.790513 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:32:34.790518 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:32:34.790524 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:32:34.790529 | orchestrator | 2026-04-11 02:32:34.790534 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-11 02:32:34.790540 | orchestrator | Saturday 11 April 2026 02:32:24 +0000 (0:00:01.379) 0:03:48.335 ******** 2026-04-11 02:32:34.790545 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:32:34.790551 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:32:34.790567 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:32:34.790574 | orchestrator | 2026-04-11 02:32:34.790579 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-11 02:32:34.790585 | orchestrator | Saturday 11 April 2026 02:32:26 +0000 (0:00:02.263) 0:03:50.598 ******** 2026-04-11 02:32:34.790591 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:32:34.790597 | orchestrator | 2026-04-11 02:32:34.790603 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-11 02:32:34.790608 | orchestrator | Saturday 11 April 2026 02:32:28 +0000 (0:00:01.709) 0:03:52.308 ******** 2026-04-11 02:32:34.790614 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-novncproxy) 2026-04-11 02:32:34.790621 | orchestrator | 2026-04-11 02:32:34.790627 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-11 02:32:34.790633 | orchestrator | Saturday 11 April 2026 02:32:28 +0000 (0:00:00.920) 0:03:53.228 ******** 2026-04-11 02:32:34.790641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-11 02:32:34.790660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-11 02:32:34.790667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-11 02:32:34.790673 | orchestrator | 
2026-04-11 02:32:34.790680 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-11 02:32:34.790686 | orchestrator | Saturday 11 April 2026 02:32:33 +0000 (0:00:04.269) 0:03:57.498 ******** 2026-04-11 02:32:34.790692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 02:32:34.790698 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:32:34.790708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 02:32:34.790715 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:32:34.790721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 02:32:34.790731 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:32:56.556886 | orchestrator | 2026-04-11 02:32:56.556974 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-11 02:32:56.556986 | orchestrator | Saturday 11 April 2026 02:32:34 +0000 (0:00:01.555) 0:03:59.054 ******** 2026-04-11 02:32:56.556995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 02:32:56.557006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 02:32:56.557034 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:32:56.557041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 02:32:56.557045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 02:32:56.557049 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:32:56.557053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 02:32:56.557057 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 02:32:56.557061 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:32:56.557065 | orchestrator | 2026-04-11 02:32:56.557069 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-11 02:32:56.557073 | orchestrator | Saturday 11 April 2026 02:32:36 +0000 (0:00:01.630) 0:04:00.685 ******** 2026-04-11 02:32:56.557077 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:32:56.557081 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:32:56.557084 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:32:56.557088 | orchestrator | 2026-04-11 02:32:56.557092 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-11 02:32:56.557096 | orchestrator | Saturday 11 April 2026 02:32:38 +0000 (0:00:02.555) 0:04:03.241 ******** 2026-04-11 02:32:56.557099 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:32:56.557103 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:32:56.557107 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:32:56.557111 | orchestrator | 2026-04-11 02:32:56.557114 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-11 02:32:56.557118 | orchestrator | Saturday 11 April 2026 02:32:42 +0000 (0:00:03.058) 0:04:06.300 ******** 2026-04-11 02:32:56.557123 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-11 02:32:56.557128 | orchestrator | 2026-04-11 02:32:56.557132 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-11 02:32:56.557136 | orchestrator | 
Saturday 11 April 2026 02:32:43 +0000 (0:00:01.352) 0:04:07.652 ******** 2026-04-11 02:32:56.557152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 02:32:56.557158 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:32:56.557162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 02:32:56.557170 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:32:56.557186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 02:32:56.557191 | orchestrator | skipping: [testbed-node-2] 2026-04-11 
02:32:56.557194 | orchestrator | 2026-04-11 02:32:56.557198 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-11 02:32:56.557202 | orchestrator | Saturday 11 April 2026 02:32:44 +0000 (0:00:01.106) 0:04:08.758 ******** 2026-04-11 02:32:56.557206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 02:32:56.557210 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:32:56.557214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 02:32:56.557218 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:32:56.557221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 02:32:56.557225 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:32:56.557229 | orchestrator | 2026-04-11 02:32:56.557233 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-11 02:32:56.557237 | orchestrator | Saturday 11 April 2026 02:32:46 +0000 (0:00:01.687) 0:04:10.446 ******** 2026-04-11 02:32:56.557241 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:32:56.557244 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:32:56.557248 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:32:56.557252 | orchestrator | 2026-04-11 02:32:56.557256 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-11 02:32:56.557259 | orchestrator | Saturday 11 April 2026 02:32:47 +0000 (0:00:01.767) 0:04:12.214 ******** 2026-04-11 02:32:56.557263 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:32:56.557268 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:32:56.557272 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:32:56.557275 | orchestrator | 2026-04-11 02:32:56.557279 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-11 02:32:56.557283 | orchestrator | Saturday 11 April 2026 02:32:50 +0000 (0:00:02.898) 0:04:15.112 ******** 2026-04-11 02:32:56.557292 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:32:56.557295 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:32:56.557299 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:32:56.557303 | orchestrator | 2026-04-11 02:32:56.557309 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-11 02:32:56.557313 | orchestrator | Saturday 11 April 2026 02:32:53 +0000 (0:00:02.935) 0:04:18.047 ******** 2026-04-11 02:32:56.557317 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-11 02:32:56.557321 | orchestrator | 2026-04-11 02:32:56.557325 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-11 02:32:56.557329 | orchestrator | Saturday 11 April 2026 02:32:55 +0000 (0:00:01.326) 0:04:19.374 ******** 2026-04-11 02:32:56.557337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-11 02:33:11.865716 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:33:11.865880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-11 02:33:11.865896 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:33:11.865906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-11 02:33:11.865915 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:33:11.865923 | orchestrator | 2026-04-11 02:33:11.865933 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-11 02:33:11.865942 | orchestrator | Saturday 11 April 2026 02:32:56 +0000 (0:00:01.452) 0:04:20.827 ******** 2026-04-11 02:33:11.865952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-11 02:33:11.865960 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:33:11.865969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-11 02:33:11.866000 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:33:11.866009 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-11 02:33:11.866062 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:33:11.866072 | orchestrator | 2026-04-11 02:33:11.866092 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-11 02:33:11.866101 | orchestrator | Saturday 11 April 2026 02:32:58 +0000 (0:00:01.453) 0:04:22.281 ******** 2026-04-11 02:33:11.866110 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:33:11.866118 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:33:11.866126 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:33:11.866134 | orchestrator | 2026-04-11 02:33:11.866142 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-11 02:33:11.866150 | orchestrator | Saturday 11 April 2026 02:33:00 +0000 (0:00:02.037) 0:04:24.318 ******** 2026-04-11 02:33:11.866158 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:33:11.866167 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:33:11.866175 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:33:11.866183 | orchestrator | 2026-04-11 02:33:11.866191 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-11 02:33:11.866199 | orchestrator | Saturday 11 April 2026 02:33:02 +0000 (0:00:02.552) 0:04:26.871 ******** 2026-04-11 02:33:11.866207 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:33:11.866214 | orchestrator | ok: 
[testbed-node-1] 2026-04-11 02:33:11.866222 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:33:11.866230 | orchestrator | 2026-04-11 02:33:11.866238 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-11 02:33:11.866247 | orchestrator | Saturday 11 April 2026 02:33:06 +0000 (0:00:03.709) 0:04:30.580 ******** 2026-04-11 02:33:11.866269 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:33:11.866279 | orchestrator | 2026-04-11 02:33:11.866289 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-11 02:33:11.866298 | orchestrator | Saturday 11 April 2026 02:33:07 +0000 (0:00:01.577) 0:04:32.158 ******** 2026-04-11 02:33:11.866308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 02:33:11.866319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 02:33:11.866337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 02:33:11.866348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 02:33:11.866364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 02:33:11.866381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 02:33:12.718670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 02:33:12.718867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 02:33:12.718907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 02:33:12.718918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 02:33:12.718927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 02:33:12.718936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 02:33:12.718963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 02:33:12.718972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 02:33:12.719059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 02:33:12.719076 | orchestrator | 2026-04-11 02:33:12.719086 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-11 02:33:12.719095 | orchestrator | Saturday 11 April 2026 02:33:12 +0000 (0:00:04.128) 0:04:36.286 ******** 2026-04-11 02:33:12.719110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 02:33:12.719119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 02:33:12.719128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 02:33:12.719146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 02:33:13.913303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 02:33:13.913406 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:33:13.913417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 02:33:13.913424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 02:33:13.913439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 02:33:13.913445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 02:33:13.913450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 02:33:13.913457 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:33:13.913473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 02:33:13.913478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 02:33:13.913482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 02:33:13.913489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 02:33:13.913493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 02:33:13.913497 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:33:13.913501 | orchestrator | 2026-04-11 02:33:13.913505 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-11 02:33:13.913510 | orchestrator | Saturday 11 April 2026 02:33:12 +0000 (0:00:00.856) 0:04:37.143 ******** 2026-04-11 02:33:13.913516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-11 02:33:13.913526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-11 02:33:13.913531 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:33:13.913538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-11 02:33:26.215018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-11 02:33:26.215118 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:33:26.215132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-11 02:33:26.215142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-11 02:33:26.215151 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:33:26.215159 | orchestrator | 2026-04-11 02:33:26.215168 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-11 02:33:26.215178 | orchestrator | Saturday 11 April 2026 02:33:13 +0000 (0:00:01.037) 0:04:38.180 ******** 2026-04-11 02:33:26.215186 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:33:26.215194 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:33:26.215202 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:33:26.215210 | orchestrator | 2026-04-11 02:33:26.215217 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-11 02:33:26.215224 | orchestrator | Saturday 11 April 2026 02:33:15 +0000 (0:00:01.762) 0:04:39.943 ******** 2026-04-11 02:33:26.215231 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:33:26.215239 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:33:26.215247 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:33:26.215254 | orchestrator | 2026-04-11 02:33:26.215261 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-11 02:33:26.215270 | orchestrator | Saturday 11 April 2026 02:33:17 +0000 (0:00:02.250) 0:04:42.194 ******** 2026-04-11 02:33:26.215278 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:33:26.215286 | orchestrator | 2026-04-11 02:33:26.215294 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2026-04-11 02:33:26.215301 | orchestrator | Saturday 11 April 2026 02:33:19 +0000 (0:00:01.515) 0:04:43.710 ******** 2026-04-11 02:33:26.215321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:33:26.215333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:33:26.215375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:33:26.215386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:33:26.215400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:33:26.215411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:33:26.215426 | orchestrator | 2026-04-11 02:33:26.215434 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-11 02:33:26.215441 | orchestrator | Saturday 11 April 2026 02:33:25 +0000 (0:00:05.620) 0:04:49.330 ******** 2026-04-11 02:33:26.215456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-11 02:33:31.601519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-11 02:33:31.601596 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:33:31.601618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-11 02:33:31.601626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-11 02:33:31.601647 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:33:31.601653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-11 02:33:31.601671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-11 02:33:31.601676 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:33:31.601681 | orchestrator | 2026-04-11 02:33:31.601687 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-11 02:33:31.601693 | orchestrator | Saturday 11 April 2026 02:33:26 +0000 (0:00:01.149) 0:04:50.480 ******** 2026-04-11 02:33:31.601699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-11 02:33:31.601706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-11 02:33:31.601714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-11 02:33:31.601726 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:33:31.601806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2026-04-11 02:33:31.601815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-11 02:33:31.601820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-11 02:33:31.601825 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:33:31.601829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-11 02:33:31.601834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-11 02:33:31.601839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-11 02:33:31.601844 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:33:31.601848 | orchestrator | 2026-04-11 02:33:31.601853 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-11 02:33:31.601857 | orchestrator | Saturday 11 April 2026 02:33:27 +0000 (0:00:01.057) 0:04:51.537 ******** 2026-04-11 02:33:31.601862 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:33:31.601867 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:33:31.601871 | orchestrator | 
skipping: [testbed-node-2] 2026-04-11 02:33:31.601876 | orchestrator | 2026-04-11 02:33:31.601903 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-11 02:33:31.601908 | orchestrator | Saturday 11 April 2026 02:33:27 +0000 (0:00:00.491) 0:04:52.029 ******** 2026-04-11 02:33:31.601912 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:33:31.601917 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:33:31.601922 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:33:31.601926 | orchestrator | 2026-04-11 02:33:31.601931 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-11 02:33:31.601935 | orchestrator | Saturday 11 April 2026 02:33:29 +0000 (0:00:01.937) 0:04:53.966 ******** 2026-04-11 02:33:31.601946 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:33:34.315029 | orchestrator | 2026-04-11 02:33:34.315102 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-11 02:33:34.315110 | orchestrator | Saturday 11 April 2026 02:33:31 +0000 (0:00:01.903) 0:04:55.869 ******** 2026-04-11 02:33:34.315117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2026-04-11 02:33:34.315143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 02:33:34.315159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:34.315164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:34.315169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 02:33:34.315176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-11 02:33:34.315196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 02:33:34.315205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:34.315220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:34.315227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 02:33:34.315237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-11 02:33:34.315244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 02:33:34.315250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:34.315261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:35.974176 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 02:33:35.974334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-11 02:33:35.974382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-11 02:33:35.974401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:35.974418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:35.974432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 02:33:35.974472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-11 02:33:35.974517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-11 02:33:35.974533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:35.974543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-11 02:33:35.974554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:35.974571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-11 02:33:36.804963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 02:33:36.805080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:36.805110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:36.805120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 02:33:36.805128 | orchestrator | 2026-04-11 02:33:36.805138 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-11 02:33:36.805147 | orchestrator | Saturday 11 April 2026 02:33:36 +0000 (0:00:04.540) 0:05:00.410 ******** 2026-04-11 02:33:36.805155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-11 02:33:36.805164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 02:33:36.805209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-11 02:33:36.805218 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:36.805230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 02:33:36.805238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:36.805246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:36.805254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:36.805262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 02:33:36.805278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 02:33:36.805294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-11 02:33:37.034364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-11 
02:33:37.034463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-11 02:33:37.034480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-11 02:33:37.034516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:37.034528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:37.034557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:37.034573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:37.034583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 02:33:37.034593 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:33:37.034604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 02:33:37.034612 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:33:37.034622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-11 02:33:37.034639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 02:33:37.034648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:37.034664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:43.042550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 02:33:43.042680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-11 02:33:43.042703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-11 02:33:43.042850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:43.042870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 02:33:43.042882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-04-11 02:33:43.042895 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:33:43.042908 | orchestrator | 2026-04-11 02:33:43.042945 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-11 02:33:43.042959 | orchestrator | Saturday 11 April 2026 02:33:37 +0000 (0:00:01.058) 0:05:01.468 ******** 2026-04-11 02:33:43.042980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-11 02:33:43.042994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-11 02:33:43.043009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-11 02:33:43.043022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-11 02:33:43.043035 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:33:43.043049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-11 02:33:43.043072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-11 02:33:43.043086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-11 02:33:43.043100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-11 02:33:43.043113 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:33:43.043126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-11 02:33:43.043139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-11 02:33:43.043153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-11 02:33:43.043166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-11 02:33:43.043179 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:33:43.043192 | orchestrator | 2026-04-11 02:33:43.043205 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-11 02:33:43.043219 | orchestrator | Saturday 11 April 2026 02:33:39 +0000 (0:00:02.002) 0:05:03.471 ******** 2026-04-11 02:33:43.043231 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:33:43.043244 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:33:43.043257 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:33:43.043269 | orchestrator | 2026-04-11 02:33:43.043282 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-11 02:33:43.043296 | orchestrator | Saturday 11 April 2026 02:33:39 +0000 (0:00:00.487) 0:05:03.958 ******** 2026-04-11 02:33:43.043308 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:33:43.043322 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:33:43.043335 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:33:43.043346 | orchestrator | 2026-04-11 02:33:43.043358 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-11 02:33:43.043369 | orchestrator | Saturday 11 April 2026 02:33:41 +0000 (0:00:01.443) 0:05:05.402 ******** 2026-04-11 02:33:43.043387 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:33:56.831441 | orchestrator | 2026-04-11 02:33:56.831545 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-11 02:33:56.831560 | orchestrator | Saturday 11 April 2026 02:33:43 +0000 (0:00:01.909) 0:05:07.311 ******** 2026-04-11 02:33:56.831574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 02:33:56.831611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 02:33:56.831673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 02:33:56.831685 | orchestrator |
2026-04-11 02:33:56.831690 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-04-11 02:33:56.831696 | orchestrator | Saturday 11 April 2026 02:33:45 +0000 (0:00:02.269) 0:05:09.581 ********
2026-04-11 02:33:56.831720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 02:33:56.831754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 02:33:56.831764 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:33:56.831771 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:33:56.831776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 02:33:56.831782 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:33:56.831787 | orchestrator |
2026-04-11 02:33:56.831792 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-04-11 02:33:56.831798 | orchestrator | Saturday 11 April 2026 02:33:45 +0000 (0:00:00.450) 0:05:10.031 ********
2026-04-11 02:33:56.831804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-11 02:33:56.831810 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:33:56.831815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-11 02:33:56.831820 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:33:56.831825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-11 02:33:56.831830 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:33:56.831835 | orchestrator |
2026-04-11 02:33:56.831840 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-04-11 02:33:56.831845 | orchestrator | Saturday 11 April 2026 02:33:46 +0000 (0:00:00.954) 0:05:10.723 ********
2026-04-11 02:33:56.831850 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:33:56.831856 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:33:56.831861 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:33:56.831866 | orchestrator |
2026-04-11 02:33:56.831871 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-04-11 02:33:56.831876 | orchestrator | Saturday 11 April 2026 02:33:47 +0000 (0:00:00.954) 0:05:11.678 ********
2026-04-11 02:33:56.831881 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:33:56.831890 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:33:56.831896 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:33:56.831901 | orchestrator |
2026-04-11 02:33:56.831906 | orchestrator | TASK [include_role : skyline] **************************************************
2026-04-11 02:33:56.831911 | orchestrator | Saturday 11 April 2026 02:33:48 +0000 (0:00:01.435) 0:05:13.114 ********
2026-04-11 02:33:56.831916 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:33:56.831921 | orchestrator |
2026-04-11 02:33:56.831927 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-04-11 02:33:56.831932 | orchestrator | Saturday 11 April 2026 02:33:50 +0000 (0:00:01.619) 0:05:14.734 ********
2026-04-11 02:33:56.831946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-04-11 02:33:58.082616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-04-11 02:33:58.082700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-04-11 02:33:58.082713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-11 02:33:58.082818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-11 02:33:58.082859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-11 02:33:58.082874 | orchestrator |
2026-04-11 02:33:58.082887 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-04-11 02:33:58.082901 | orchestrator | Saturday 11 April 2026 02:33:56 +0000 (0:00:06.368) 0:05:21.102 ********
2026-04-11 02:33:58.082916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-04-11 02:33:58.082931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-11 02:33:58.082955 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:33:58.082975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-04-11 02:33:58.082998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-11 02:34:09.202068 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:09.202166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-04-11 02:34:09.202180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-11 02:34:09.202211 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:09.202219 | orchestrator |
2026-04-11 02:34:09.202227 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-04-11 02:34:09.202235 | orchestrator | Saturday 11 April 2026 02:33:58 +0000 (0:00:01.246) 0:05:22.348 ********
2026-04-11 02:34:09.202243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-11 02:34:09.202253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-11 02:34:09.202262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-11 02:34:09.202282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-11 02:34:09.202289 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:34:09.202295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-11 02:34:09.202302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-11 02:34:09.202309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-11 02:34:09.202317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-11 02:34:09.202324 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:09.202344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-11 02:34:09.202349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-04-11 02:34:09.202353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-11 02:34:09.202357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-04-11 02:34:09.202362 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:09.202366 | orchestrator |
2026-04-11 02:34:09.202375 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-04-11 02:34:09.202380 | orchestrator | Saturday 11 April 2026 02:33:59 +0000 (0:00:01.015) 0:05:23.364 ********
2026-04-11 02:34:09.202384 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:34:09.202388 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:34:09.202392 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:34:09.202396 | orchestrator |
2026-04-11 02:34:09.202403 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-04-11 02:34:09.202409 | orchestrator | Saturday 11 April 2026 02:34:00 +0000 (0:00:01.338) 0:05:24.702 ********
2026-04-11 02:34:09.202416 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:34:09.202422 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:34:09.202428 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:34:09.202435 | orchestrator |
2026-04-11 02:34:09.202442 | orchestrator | TASK [include_role : swift] ****************************************************
2026-04-11 02:34:09.202449 | orchestrator | Saturday 11 April 2026 02:34:02 +0000 (0:00:02.421) 0:05:27.124 ********
2026-04-11 02:34:09.202457 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:34:09.202461 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:09.202465 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:09.202469 | orchestrator |
2026-04-11 02:34:09.202474 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-04-11 02:34:09.202478 | orchestrator | Saturday 11 April 2026 02:34:03 +0000 (0:00:00.717) 0:05:27.841 ********
2026-04-11 02:34:09.202482 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:34:09.202486 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:09.202490 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:09.202494 | orchestrator |
2026-04-11 02:34:09.202499 | orchestrator | TASK [include_role : trove] ****************************************************
2026-04-11 02:34:09.202506 | orchestrator | Saturday 11 April 2026 02:34:03 +0000 (0:00:00.334) 0:05:28.175 ********
2026-04-11 02:34:09.202512 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:34:09.202519 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:09.202525 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:09.202531 | orchestrator |
2026-04-11 02:34:09.202538 | orchestrator | TASK [include_role : venus] ****************************************************
2026-04-11 02:34:09.202544 | orchestrator | Saturday 11 April 2026 02:34:04 +0000 (0:00:00.357) 0:05:28.533 ********
2026-04-11 02:34:09.202550 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:34:09.202556 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:09.202562 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:09.202568 | orchestrator |
2026-04-11 02:34:09.202575 | orchestrator | TASK [include_role : watcher] **************************************************
2026-04-11 02:34:09.202582 | orchestrator | Saturday 11 April 2026 02:34:04 +0000 (0:00:00.350) 0:05:28.883 ********
2026-04-11 02:34:09.202590 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:34:09.202597 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:09.202605 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:09.202613 | orchestrator |
2026-04-11 02:34:09.202618 | orchestrator | TASK [include_role : zun] ******************************************************
2026-04-11 02:34:09.202627 | orchestrator | Saturday 11 April 2026 02:34:05 +0000 (0:00:00.719) 0:05:29.603 ********
2026-04-11 02:34:09.202632 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:34:09.202637 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:09.202642 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:09.202647 | orchestrator |
2026-04-11 02:34:09.202652 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-04-11 02:34:09.202656 | orchestrator | Saturday 11 April 2026 02:34:05 +0000 (0:00:00.581) 0:05:30.185 ********
2026-04-11 02:34:09.202661 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:34:09.202667 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:34:09.202671 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:34:09.202676 | orchestrator |
2026-04-11 02:34:09.202682 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-04-11 02:34:09.202695 | orchestrator | Saturday 11 April 2026 02:34:06 +0000 (0:00:00.689) 0:05:30.874 ********
2026-04-11 02:34:09.202702 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:34:09.202708 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:34:09.202715 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:34:09.202721 | orchestrator |
2026-04-11 02:34:09.202728 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-04-11 02:34:09.202753 | orchestrator | Saturday 11 April 2026 02:34:06 +0000 (0:00:00.371) 0:05:31.246 ********
2026-04-11 02:34:09.202761 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:34:09.202768 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:34:09.202775 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:34:09.202782 | orchestrator |
2026-04-11 02:34:09.202789 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-04-11 02:34:09.202796 | orchestrator | Saturday 11 April 2026 02:34:08 +0000 (0:00:01.322) 0:05:32.568 ********
2026-04-11 02:34:09.202803 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:34:09.202811 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:34:09.202826 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:34:53.371899 | orchestrator |
2026-04-11 02:34:53.372023 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-04-11 02:34:53.372039 | orchestrator | Saturday 11 April 2026 02:34:09 +0000 (0:00:00.900) 0:05:33.469 ********
2026-04-11 02:34:53.372049 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:34:53.372059 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:34:53.372068 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:34:53.372077 | orchestrator |
2026-04-11 02:34:53.372086 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-04-11 02:34:53.372095 | orchestrator | Saturday 11 April 2026 02:34:10 +0000 (0:00:00.872) 0:05:34.341 ********
2026-04-11 02:34:53.372105 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:34:53.372114 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:34:53.372123 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:34:53.372132 | orchestrator |
2026-04-11 02:34:53.372141 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-04-11 02:34:53.372150 | orchestrator | Saturday 11 April 2026 02:34:20 +0000 (0:00:10.053) 0:05:44.395 ********
2026-04-11 02:34:53.372159 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:34:53.372167 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:34:53.372176 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:34:53.372185 | orchestrator |
2026-04-11 02:34:53.372194 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-04-11 02:34:53.372203 | orchestrator | Saturday 11 April 2026 02:34:21 +0000 (0:00:01.235) 0:05:45.631 ********
2026-04-11 02:34:53.372212 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:34:53.372220 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:34:53.372229 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:34:53.372238 | orchestrator |
2026-04-11 02:34:53.372247 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-04-11 02:34:53.372256 | orchestrator | Saturday 11 April 2026 02:34:37 +0000 (0:00:16.085) 0:06:01.717 ********
2026-04-11 02:34:53.372265 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:34:53.372274 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:34:53.372283 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:34:53.372292 | orchestrator |
2026-04-11 02:34:53.372300 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-04-11 02:34:53.372309 | orchestrator | Saturday 11 April 2026 02:34:38 +0000 (0:00:00.850) 0:06:02.568 ********
2026-04-11 02:34:53.372318 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:34:53.372327 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:34:53.372336 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:34:53.372344 | orchestrator |
2026-04-11 02:34:53.372353 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-04-11 02:34:53.372362 | orchestrator | Saturday 11 April 2026 02:34:43 +0000 (0:00:04.806) 0:06:07.375 ********
2026-04-11 02:34:53.372398 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:34:53.372408 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:53.372417 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:53.372425 | orchestrator |
2026-04-11 02:34:53.372434 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-04-11 02:34:53.372443 | orchestrator | Saturday 11 April 2026 02:34:43 +0000 (0:00:00.895) 0:06:08.270 ********
2026-04-11 02:34:53.372452 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:34:53.372460 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:53.372469 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:53.372477 | orchestrator |
2026-04-11 02:34:53.372486 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-04-11 02:34:53.372495 | orchestrator | Saturday 11 April 2026 02:34:44 +0000 (0:00:00.416) 0:06:08.687 ********
2026-04-11 02:34:53.372504 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:34:53.372512 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:53.372521 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:53.372530 | orchestrator |
2026-04-11 02:34:53.372538 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-04-11 02:34:53.372547 | orchestrator | Saturday 11 April 2026 02:34:44 +0000 (0:00:00.427) 0:06:09.114 ********
2026-04-11 02:34:53.372556 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:34:53.372565 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:53.372573 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:53.372582 | orchestrator |
2026-04-11 02:34:53.372591 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-04-11 02:34:53.372600 | orchestrator | Saturday 11 April 2026 02:34:45 +0000 (0:00:00.446) 0:06:09.561 ********
2026-04-11 02:34:53.372608 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:34:53.372629 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:53.372638 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:53.372647 | orchestrator |
2026-04-11 02:34:53.372656 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-04-11 02:34:53.372664 | orchestrator | Saturday 11 April 2026 02:34:46 +0000 (0:00:00.946) 0:06:10.508 ********
2026-04-11 02:34:53.372673 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:34:53.372682 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:34:53.372691 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:34:53.372699 | orchestrator |
2026-04-11 02:34:53.372708 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-04-11 02:34:53.372717 | orchestrator | Saturday 11 April 2026 02:34:46 +0000 (0:00:00.465) 0:06:10.973 ********
2026-04-11 02:34:53.372725 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:34:53.372734 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:34:53.372767 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:34:53.372776 | orchestrator |
2026-04-11 02:34:53.372785 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-04-11 02:34:53.372793 | orchestrator | Saturday 11 April 2026 02:34:51 +0000 (0:00:04.838) 0:06:15.812 ********
2026-04-11 02:34:53.372802 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:34:53.372811 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:34:53.372819 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:34:53.372828 | orchestrator |
2026-04-11 02:34:53.372837 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:34:53.372847 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-11 02:34:53.372872 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-11 02:34:53.372882 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-11 02:34:53.372891 | orchestrator |
2026-04-11 02:34:53.372911 | orchestrator |
2026-04-11 02:34:53.372925 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:34:53.372947 | orchestrator | Saturday 11 April 2026 02:34:52 +0000 (0:00:00.906) 0:06:16.718 ********
2026-04-11 02:34:53.372963 | orchestrator | ===============================================================================
2026-04-11 02:34:53.372977 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 16.09s
2026-04-11 02:34:53.372990 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.05s
2026-04-11 02:34:53.373004 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.37s
2026-04-11 02:34:53.373018 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.62s
2026-04-11 02:34:53.373032 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.84s
2026-04-11 02:34:53.373045 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.81s
2026-04-11 02:34:53.373059 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.57s
2026-04-11 02:34:53.373071 | orchestrator | haproxy-config : Copying over prometheus haproxy config
----------------- 4.54s 2026-04-11 02:34:53.373084 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.50s 2026-04-11 02:34:53.373098 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.29s 2026-04-11 02:34:53.373112 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.27s 2026-04-11 02:34:53.373128 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.13s 2026-04-11 02:34:53.373143 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.88s 2026-04-11 02:34:53.373156 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.87s 2026-04-11 02:34:53.373171 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.73s 2026-04-11 02:34:53.373185 | orchestrator | proxysql-config : Copying over nova-cell ProxySQL rules config ---------- 3.71s 2026-04-11 02:34:53.373199 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.66s 2026-04-11 02:34:53.373214 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 3.64s 2026-04-11 02:34:53.373230 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.62s 2026-04-11 02:34:53.373244 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.59s 2026-04-11 02:34:56.517127 | orchestrator | 2026-04-11 02:34:56 | INFO  | Task 4a28de41-1826-46c3-bda0-da671fb5f9c7 (opensearch) was prepared for execution. 2026-04-11 02:34:56.517653 | orchestrator | 2026-04-11 02:34:56 | INFO  | It takes a moment until task 4a28de41-1826-46c3-bda0-da671fb5f9c7 (opensearch) has been started and output is visible here. 
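The PLAY RECAP block above reports per-host task counters in a fixed `key=value` layout. As a rough illustration (not part of the job itself), a summary line like the ones shown can be picked apart with a small hypothetical helper:

```python
import re

# Matches an Ansible "PLAY RECAP" summary line in the layout seen above,
# e.g. "testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 ..."
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)\s+"
    r"skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)\s+ignored=(?P<ignored>\d+)"
)

def parse_recap_line(line: str) -> dict:
    """Return the host name plus integer counters, or raise ValueError."""
    m = RECAP_RE.search(line)
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    fields = m.groupdict()
    host = fields.pop("host")
    return {"host": host, **{k: int(v) for k, v in fields.items()}}

line = ("testbed-node-0 : ok=123  changed=76  unreachable=0 "
        "failed=0 skipped=97  rescued=0 ignored=0")
print(parse_recap_line(line))
```

A check like `failed == 0 and unreachable == 0` on each parsed line is one simple way to decide, after the fact, whether a play in a log like this completed cleanly.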
2026-04-11 02:35:07.929944 | orchestrator |
2026-04-11 02:35:07.930078 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 02:35:07.930090 | orchestrator |
2026-04-11 02:35:07.930097 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 02:35:07.930104 | orchestrator | Saturday 11 April 2026 02:35:01 +0000 (0:00:00.279) 0:00:00.279 ********
2026-04-11 02:35:07.930114 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:35:07.930125 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:35:07.930136 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:35:07.930142 | orchestrator |
2026-04-11 02:35:07.930149 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 02:35:07.930168 | orchestrator | Saturday 11 April 2026 02:35:01 +0000 (0:00:00.331) 0:00:00.611 ********
2026-04-11 02:35:07.930175 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-11 02:35:07.930182 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-04-11 02:35:07.930188 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-04-11 02:35:07.930194 | orchestrator |
2026-04-11 02:35:07.930200 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-11 02:35:07.930224 | orchestrator |
2026-04-11 02:35:07.930231 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-11 02:35:07.930237 | orchestrator | Saturday 11 April 2026 02:35:01 +0000 (0:00:00.482) 0:00:01.093 ********
2026-04-11 02:35:07.930243 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:35:07.930249 | orchestrator |
2026-04-11 02:35:07.930256 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-04-11 02:35:07.930262 | orchestrator | Saturday 11 April 2026 02:35:02 +0000 (0:00:00.549) 0:00:01.643 ********
2026-04-11 02:35:07.930268 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-11 02:35:07.930274 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-11 02:35:07.930281 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-11 02:35:07.930287 | orchestrator |
2026-04-11 02:35:07.930293 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-04-11 02:35:07.930299 | orchestrator | Saturday 11 April 2026 02:35:03 +0000 (0:00:00.713) 0:00:02.357 ********
2026-04-11 02:35:07.930309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-11 02:35:07.930319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS':
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:35:07.930340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:35:07.930353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:35:07.930373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:35:07.930382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:35:07.930392 | orchestrator | 2026-04-11 02:35:07.930402 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-11 02:35:07.930412 | orchestrator | Saturday 11 April 2026 02:35:04 +0000 (0:00:01.799) 0:00:04.156 ******** 2026-04-11 02:35:07.930421 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:35:07.930430 | orchestrator | 2026-04-11 02:35:07.930440 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-11 02:35:07.930448 | orchestrator | Saturday 11 April 2026 02:35:05 +0000 (0:00:00.559) 0:00:04.715 ******** 2026-04-11 02:35:07.930470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:35:08.695638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:35:08.695727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:35:08.695777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:35:08.695794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:35:08.695860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:35:08.695876 | orchestrator | 2026-04-11 02:35:08.695889 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-11 02:35:08.695901 | orchestrator | Saturday 11 April 2026 02:35:07 +0000 (0:00:02.380) 0:00:07.096 ******** 2026-04-11 02:35:08.695928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-11 02:35:08.695950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-11 02:35:08.695963 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:35:08.695976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-11 02:35:08.696010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-11 02:35:09.658660 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:35:09.658789 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-11 02:35:09.658818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-11 02:35:09.658834 | 
orchestrator | skipping: [testbed-node-2] 2026-04-11 02:35:09.658847 | orchestrator | 2026-04-11 02:35:09.658862 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-11 02:35:09.658878 | orchestrator | Saturday 11 April 2026 02:35:08 +0000 (0:00:00.769) 0:00:07.865 ******** 2026-04-11 02:35:09.658915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-11 02:35:09.658945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-11 02:35:09.658999 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:35:09.659012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-11 02:35:09.659021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-11 02:35:09.659030 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:35:09.659044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-11 02:35:09.659057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-11 02:35:09.659066 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:35:09.659074 | orchestrator | 2026-04-11 02:35:09.659082 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-11 02:35:09.659097 | orchestrator | Saturday 11 April 2026 02:35:09 +0000 (0:00:00.955) 0:00:08.821 ******** 2026-04-11 02:35:17.761929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:35:17.762149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:35:17.762172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:35:17.762230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:35:17.762266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:35:17.762281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:35:17.762302 | orchestrator | 2026-04-11 02:35:17.762315 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-11 02:35:17.762328 | orchestrator | Saturday 11 April 2026 02:35:11 +0000 (0:00:02.187) 0:00:11.008 ******** 2026-04-11 02:35:17.762340 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:35:17.762352 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:35:17.762363 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:35:17.762375 | orchestrator | 2026-04-11 02:35:17.762395 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-11 02:35:17.762413 | orchestrator | Saturday 11 April 2026 02:35:14 +0000 (0:00:02.385) 0:00:13.394 ******** 2026-04-11 02:35:17.762430 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:35:17.762449 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:35:17.762466 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:35:17.762483 | orchestrator | 2026-04-11 02:35:17.762502 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-11 
02:35:17.762521 | orchestrator | Saturday 11 April 2026 02:35:16 +0000 (0:00:01.876) 0:00:15.271 ******** 2026-04-11 02:35:17.762541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:35:17.762569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:35:17.762601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-11 02:38:04.159624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:38:04.159734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:38:04.159796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-11 02:38:04.159805 | orchestrator | 2026-04-11 02:38:04.159812 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-11 02:38:04.159819 | orchestrator | Saturday 11 April 2026 02:35:17 +0000 (0:00:01.657) 0:00:16.928 ******** 2026-04-11 02:38:04.159824 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:38:04.159830 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:38:04.159836 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:38:04.159841 | orchestrator | 2026-04-11 02:38:04.159846 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-11 02:38:04.159852 | orchestrator | Saturday 11 April 2026 02:35:18 +0000 (0:00:00.338) 0:00:17.267 ******** 2026-04-11 02:38:04.159857 | orchestrator | 2026-04-11 02:38:04.159862 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-11 02:38:04.159867 | orchestrator | Saturday 11 April 2026 02:35:18 +0000 (0:00:00.070) 0:00:17.337 ******** 2026-04-11 02:38:04.159872 | orchestrator | 2026-04-11 02:38:04.159877 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-11 02:38:04.159895 | orchestrator | Saturday 11 April 2026 02:35:18 +0000 (0:00:00.071) 0:00:17.408 ******** 2026-04-11 02:38:04.159901 | orchestrator | 2026-04-11 02:38:04.159906 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-11 02:38:04.159923 | orchestrator | Saturday 11 April 2026 02:35:18 +0000 (0:00:00.068) 0:00:17.477 ******** 2026-04-11 02:38:04.159929 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:38:04.159934 | orchestrator | 2026-04-11 02:38:04.159948 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-11 02:38:04.159954 | 
orchestrator | Saturday 11 April 2026 02:35:18 +0000 (0:00:00.227) 0:00:17.705 ******** 2026-04-11 02:38:04.159966 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:38:04.159971 | orchestrator | 2026-04-11 02:38:04.159976 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-11 02:38:04.159981 | orchestrator | Saturday 11 April 2026 02:35:19 +0000 (0:00:00.739) 0:00:18.444 ******** 2026-04-11 02:38:04.159986 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:38:04.159992 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:38:04.159997 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:38:04.160002 | orchestrator | 2026-04-11 02:38:04.160007 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-11 02:38:04.160012 | orchestrator | Saturday 11 April 2026 02:36:26 +0000 (0:01:07.200) 0:01:25.645 ******** 2026-04-11 02:38:04.160017 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:38:04.160022 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:38:04.160027 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:38:04.160033 | orchestrator | 2026-04-11 02:38:04.160038 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-11 02:38:04.160043 | orchestrator | Saturday 11 April 2026 02:37:53 +0000 (0:01:27.289) 0:02:52.935 ******** 2026-04-11 02:38:04.160048 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:38:04.160054 | orchestrator | 2026-04-11 02:38:04.160059 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-11 02:38:04.160064 | orchestrator | Saturday 11 April 2026 02:37:54 +0000 (0:00:00.565) 0:02:53.500 ******** 2026-04-11 02:38:04.160069 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:38:04.160075 | orchestrator | 2026-04-11 
02:38:04.160080 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-11 02:38:04.160085 | orchestrator | Saturday 11 April 2026 02:37:56 +0000 (0:00:02.529) 0:02:56.030 ******** 2026-04-11 02:38:04.160090 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:38:04.160095 | orchestrator | 2026-04-11 02:38:04.160101 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-11 02:38:04.160106 | orchestrator | Saturday 11 April 2026 02:37:59 +0000 (0:00:02.203) 0:02:58.233 ******** 2026-04-11 02:38:04.160111 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:38:04.160124 | orchestrator | 2026-04-11 02:38:04.160129 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-11 02:38:04.160134 | orchestrator | Saturday 11 April 2026 02:38:01 +0000 (0:00:02.632) 0:03:00.866 ******** 2026-04-11 02:38:04.160139 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:38:04.160144 | orchestrator | 2026-04-11 02:38:04.160149 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 02:38:04.160156 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-11 02:38:04.160162 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-11 02:38:04.160171 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-11 02:38:04.160176 | orchestrator | 2026-04-11 02:38:04.160186 | orchestrator | 2026-04-11 02:38:04.160193 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 02:38:04.160199 | orchestrator | Saturday 11 April 2026 02:38:04 +0000 (0:00:02.438) 0:03:03.304 ******** 2026-04-11 02:38:04.160205 | orchestrator | 
=============================================================================== 2026-04-11 02:38:04.160211 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 87.29s 2026-04-11 02:38:04.160217 | orchestrator | opensearch : Restart opensearch container ------------------------------ 67.20s 2026-04-11 02:38:04.160223 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.63s 2026-04-11 02:38:04.160229 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.53s 2026-04-11 02:38:04.160234 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.44s 2026-04-11 02:38:04.160240 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.39s 2026-04-11 02:38:04.160246 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.38s 2026-04-11 02:38:04.160252 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.20s 2026-04-11 02:38:04.160258 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.19s 2026-04-11 02:38:04.160264 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.88s 2026-04-11 02:38:04.160269 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.80s 2026-04-11 02:38:04.160275 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.66s 2026-04-11 02:38:04.160281 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.96s 2026-04-11 02:38:04.160287 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.77s 2026-04-11 02:38:04.160293 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.74s 2026-04-11 02:38:04.160299 | orchestrator | 
opensearch : Setting sysctl values -------------------------------------- 0.71s 2026-04-11 02:38:04.160309 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-04-11 02:38:04.564583 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2026-04-11 02:38:04.564674 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-04-11 02:38:04.564686 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2026-04-11 02:38:07.114866 | orchestrator | 2026-04-11 02:38:07 | INFO  | Task b07c93f3-ee71-4f8c-bd62-05e5e9d06f07 (memcached) was prepared for execution. 2026-04-11 02:38:07.114938 | orchestrator | 2026-04-11 02:38:07 | INFO  | It takes a moment until task b07c93f3-ee71-4f8c-bd62-05e5e9d06f07 (memcached) has been started and output is visible here. 2026-04-11 02:38:20.211181 | orchestrator | 2026-04-11 02:38:20.211309 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 02:38:20.211330 | orchestrator | 2026-04-11 02:38:20.211345 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 02:38:20.211359 | orchestrator | Saturday 11 April 2026 02:38:11 +0000 (0:00:00.283) 0:00:00.283 ******** 2026-04-11 02:38:20.211373 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:38:20.211386 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:38:20.211394 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:38:20.211402 | orchestrator | 2026-04-11 02:38:20.211411 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 02:38:20.211419 | orchestrator | Saturday 11 April 2026 02:38:12 +0000 (0:00:00.316) 0:00:00.599 ******** 2026-04-11 02:38:20.211428 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-11 02:38:20.211436 | 
orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-11 02:38:20.211444 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-11 02:38:20.211451 | orchestrator | 2026-04-11 02:38:20.211459 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-11 02:38:20.211489 | orchestrator | 2026-04-11 02:38:20.211497 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-11 02:38:20.211506 | orchestrator | Saturday 11 April 2026 02:38:12 +0000 (0:00:00.482) 0:00:01.082 ******** 2026-04-11 02:38:20.211514 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:38:20.211523 | orchestrator | 2026-04-11 02:38:20.211531 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-11 02:38:20.211539 | orchestrator | Saturday 11 April 2026 02:38:13 +0000 (0:00:00.553) 0:00:01.635 ******** 2026-04-11 02:38:20.211547 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-11 02:38:20.211555 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-11 02:38:20.211563 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-11 02:38:20.211571 | orchestrator | 2026-04-11 02:38:20.211579 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-11 02:38:20.211587 | orchestrator | Saturday 11 April 2026 02:38:13 +0000 (0:00:00.753) 0:00:02.388 ******** 2026-04-11 02:38:20.211594 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-11 02:38:20.211602 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-11 02:38:20.211610 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-11 02:38:20.211618 | orchestrator | 2026-04-11 02:38:20.211626 | orchestrator | TASK [memcached : Check 
memcached container] *********************************** 2026-04-11 02:38:20.211634 | orchestrator | Saturday 11 April 2026 02:38:15 +0000 (0:00:01.778) 0:00:04.167 ******** 2026-04-11 02:38:20.211661 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:38:20.211673 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:38:20.211684 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:38:20.211703 | orchestrator | 2026-04-11 02:38:20.211720 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-11 02:38:20.211732 | orchestrator | Saturday 11 April 2026 02:38:17 +0000 (0:00:01.554) 0:00:05.721 ******** 2026-04-11 02:38:20.211775 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:38:20.211789 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:38:20.211801 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:38:20.211814 | orchestrator | 2026-04-11 02:38:20.211827 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 02:38:20.211839 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 02:38:20.211853 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 02:38:20.211865 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 02:38:20.211877 | orchestrator | 2026-04-11 02:38:20.211891 | orchestrator | 2026-04-11 02:38:20.211905 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 02:38:20.211917 | orchestrator | Saturday 11 April 2026 02:38:19 +0000 (0:00:02.539) 0:00:08.261 ******** 2026-04-11 02:38:20.211930 | orchestrator | =============================================================================== 2026-04-11 02:38:20.211944 | orchestrator | memcached : Restart memcached container 
--------------------------------- 2.54s 2026-04-11 02:38:20.211958 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.78s 2026-04-11 02:38:20.211972 | orchestrator | memcached : Check memcached container ----------------------------------- 1.55s 2026-04-11 02:38:20.211985 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.75s 2026-04-11 02:38:20.211998 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.55s 2026-04-11 02:38:20.212013 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2026-04-11 02:38:20.212040 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-04-11 02:38:22.855591 | orchestrator | 2026-04-11 02:38:22 | INFO  | Task 4d7a4533-db63-4d4b-9873-d431f4397919 (redis) was prepared for execution. 2026-04-11 02:38:22.855693 | orchestrator | 2026-04-11 02:38:22 | INFO  | It takes a moment until task 4d7a4533-db63-4d4b-9873-d431f4397919 (redis) has been started and output is visible here. 
2026-04-11 02:38:32.838451 | orchestrator |
2026-04-11 02:38:32.838561 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 02:38:32.838574 | orchestrator |
2026-04-11 02:38:32.838582 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 02:38:32.838591 | orchestrator | Saturday 11 April 2026 02:38:27 +0000 (0:00:00.286) 0:00:00.286 ********
2026-04-11 02:38:32.838604 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:38:32.838617 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:38:32.838629 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:38:32.838641 | orchestrator |
2026-04-11 02:38:32.838653 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 02:38:32.838665 | orchestrator | Saturday 11 April 2026 02:38:27 +0000 (0:00:00.333) 0:00:00.620 ********
2026-04-11 02:38:32.838679 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-04-11 02:38:32.838691 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-04-11 02:38:32.838703 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-04-11 02:38:32.838715 | orchestrator |
2026-04-11 02:38:32.838727 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-04-11 02:38:32.838738 | orchestrator |
2026-04-11 02:38:32.838777 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-04-11 02:38:32.838789 | orchestrator | Saturday 11 April 2026 02:38:28 +0000 (0:00:00.454) 0:00:01.075 ********
2026-04-11 02:38:32.838801 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:38:32.838815 | orchestrator |
2026-04-11 02:38:32.838828 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-04-11 02:38:32.838841 | orchestrator | Saturday 11 April 2026 02:38:28 +0000 (0:00:00.545) 0:00:01.620 ********
2026-04-11 02:38:32.838857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 02:38:32.838877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 02:38:32.838891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 02:38:32.838934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 02:38:32.838967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 02:38:32.838976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 02:38:32.838985 | orchestrator |
2026-04-11 02:38:32.838994 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-04-11 02:38:32.839002 | orchestrator | Saturday 11 April 2026 02:38:30 +0000 (0:00:01.255) 0:00:02.875 ********
2026-04-11 02:38:32.839011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 02:38:32.839059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 02:38:32.839069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 02:38:32.839084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 02:38:32.839098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 02:38:37.076868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 02:38:37.076970 | orchestrator |
2026-04-11 02:38:37.077016 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-04-11 02:38:37.077034 | orchestrator | Saturday 11 April 2026 02:38:32 +0000 (0:00:02.739) 0:00:05.615 ********
2026-04-11 02:38:37.077050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 02:38:37.077082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 02:38:37.077092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 02:38:37.077122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 02:38:37.077137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 02:38:37.077170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 02:38:37.077185 | orchestrator |
2026-04-11 02:38:37.077198 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-04-11 02:38:37.077210 | orchestrator | Saturday 11 April 2026 02:38:35 +0000 (0:00:02.541) 0:00:08.156 ********
2026-04-11 02:38:37.077225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 02:38:37.077240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 02:38:37.077260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-11 02:38:37.077286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 02:38:37.077301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 02:38:37.077322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-11 02:38:43.855196 | orchestrator |
2026-04-11 02:38:43.855339 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-11 02:38:43.855366 | orchestrator | Saturday 11 April 2026 02:38:36 +0000 (0:00:01.489) 0:00:09.646 ********
2026-04-11 02:38:43.855385 | orchestrator |
2026-04-11 02:38:43.855404 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-11 02:38:43.855423 | orchestrator | Saturday 11 April 2026 02:38:36 +0000 (0:00:00.065) 0:00:09.711 ********
2026-04-11 02:38:43.855440 | orchestrator |
2026-04-11 02:38:43.855457 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-11 02:38:43.855474 | orchestrator | Saturday 11 April 2026 02:38:36 +0000 (0:00:00.069) 0:00:09.781 ********
2026-04-11 02:38:43.855493 | orchestrator |
2026-04-11 02:38:43.855511 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-11 02:38:43.855529 | orchestrator | Saturday 11 April 2026 02:38:37 +0000 (0:00:00.073) 0:00:09.854 ********
2026-04-11 02:38:43.855547 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:38:43.855566 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:38:43.855583 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:38:43.855617 | orchestrator |
2026-04-11 02:38:43.855636 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-11 02:38:43.855654 | orchestrator | Saturday 11 April 2026 02:38:40 +0000 (0:00:03.051) 0:00:12.905 ********
2026-04-11 02:38:43.855712 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:38:43.855733 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:38:43.855895 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:38:43.855916 | orchestrator |
2026-04-11 02:38:43.855937 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:38:43.855957 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:38:43.855978 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:38:43.856018 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 02:38:43.856037 | orchestrator |
2026-04-11 02:38:43.856056 | orchestrator |
2026-04-11 02:38:43.856074 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:38:43.856092 | orchestrator | Saturday 11 April 2026 02:38:43 +0000 (0:00:03.356) 0:00:16.261 ********
2026-04-11 02:38:43.856108 | orchestrator | ===============================================================================
2026-04-11 02:38:43.856125 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.36s
2026-04-11 02:38:43.856143 | orchestrator | redis : Restart redis container ----------------------------------------- 3.05s
2026-04-11 02:38:43.856162 | orchestrator | redis : Copying over default config.json files -------------------------- 2.74s
2026-04-11 02:38:43.856182 | orchestrator | redis : Copying over redis config files --------------------------------- 2.54s
2026-04-11 02:38:43.856202 | orchestrator | redis : Check redis containers ------------------------------------------ 1.49s
2026-04-11 02:38:43.856221 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.26s
2026-04-11 02:38:43.856239 | orchestrator | redis : include_tasks --------------------------------------------------- 0.55s
2026-04-11 02:38:43.856258 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s
2026-04-11 02:38:43.856278 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-04-11 02:38:43.856298 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s
2026-04-11 02:38:46.556563 | orchestrator | 2026-04-11 02:38:46 | INFO  | Task d9744781-164e-4be8-9761-bb3016ebd641 (mariadb) was prepared for execution.
2026-04-11 02:38:46.556633 | orchestrator | 2026-04-11 02:38:46 | INFO  | It takes a moment until task d9744781-164e-4be8-9761-bb3016ebd641 (mariadb) has been started and output is visible here.
2026-04-11 02:39:01.275431 | orchestrator | 2026-04-11 02:39:01.275565 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 02:39:01.275587 | orchestrator | 2026-04-11 02:39:01.275603 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 02:39:01.275618 | orchestrator | Saturday 11 April 2026 02:38:51 +0000 (0:00:00.181) 0:00:00.181 ******** 2026-04-11 02:39:01.275634 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:39:01.275649 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:39:01.275666 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:39:01.275681 | orchestrator | 2026-04-11 02:39:01.275696 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 02:39:01.275712 | orchestrator | Saturday 11 April 2026 02:38:51 +0000 (0:00:00.332) 0:00:00.513 ******** 2026-04-11 02:39:01.275727 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-11 02:39:01.275811 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-11 02:39:01.275830 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-11 02:39:01.275845 | orchestrator | 2026-04-11 02:39:01.275860 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-11 02:39:01.275876 | orchestrator | 2026-04-11 02:39:01.275892 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-11 02:39:01.275941 | orchestrator | Saturday 11 April 2026 02:38:52 +0000 (0:00:00.641) 0:00:01.154 ******** 2026-04-11 02:39:01.275960 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 02:39:01.275980 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-11 02:39:01.275996 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-11 02:39:01.276011 | orchestrator | 
2026-04-11 02:39:01.276027 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-11 02:39:01.276043 | orchestrator | Saturday 11 April 2026 02:38:52 +0000 (0:00:00.384) 0:00:01.539 ******** 2026-04-11 02:39:01.276060 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:39:01.276077 | orchestrator | 2026-04-11 02:39:01.276092 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-11 02:39:01.276107 | orchestrator | Saturday 11 April 2026 02:38:53 +0000 (0:00:00.597) 0:00:02.137 ******** 2026-04-11 02:39:01.276148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 02:39:01.276196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 02:39:01.276238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 02:39:01.276258 | orchestrator | 2026-04-11 02:39:01.276276 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-11 02:39:01.276293 | orchestrator | Saturday 11 April 2026 02:38:55 +0000 (0:00:02.818) 0:00:04.955 ******** 2026-04-11 02:39:01.276306 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:39:01.276322 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:39:01.276335 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:39:01.276351 | orchestrator | 2026-04-11 02:39:01.276367 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-11 02:39:01.276384 | orchestrator | Saturday 11 April 2026 02:38:56 +0000 (0:00:00.688) 0:00:05.644 ******** 2026-04-11 02:39:01.276398 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:39:01.276414 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:39:01.276428 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:39:01.276443 | orchestrator | 2026-04-11 02:39:01.276457 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-11 02:39:01.276472 | orchestrator | Saturday 11 April 2026 02:38:58 +0000 (0:00:01.498) 0:00:07.143 ******** 2026-04-11 02:39:01.276503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 02:39:09.581804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-11 02:39:09.581913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-11 02:39:09.581958 | orchestrator |
2026-04-11 02:39:09.581971 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-04-11 02:39:09.581979 | orchestrator | Saturday 11 April 2026 02:39:01 +0000 (0:00:03.204) 0:00:10.348 ********
2026-04-11 02:39:09.581987 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:39:09.581996 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:39:09.582007 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:39:09.582110 | orchestrator |
2026-04-11 02:39:09.582124 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-04-11 02:39:09.582155 | orchestrator | Saturday 11 April 2026 02:39:02 +0000 (0:00:01.122) 0:00:11.470 ********
2026-04-11 02:39:09.582166 |
orchestrator | changed: [testbed-node-0]
2026-04-11 02:39:09.582175 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:39:09.582185 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:39:09.582196 | orchestrator |
2026-04-11 02:39:09.582207 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-11 02:39:09.582217 | orchestrator | Saturday 11 April 2026 02:39:06 +0000 (0:00:04.199) 0:00:15.669 ********
2026-04-11 02:39:09.582227 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:39:09.582238 | orchestrator |
2026-04-11 02:39:09.582249 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-11 02:39:09.582260 | orchestrator | Saturday 11 April 2026 02:39:07 +0000 (0:00:00.595) 0:00:16.264 ********
2026-04-11 02:39:09.582281 | orchestrator | skipping: [testbed-node-0] => (item=mariadb [full item value elided; identical to the mariadb item printed above, with 'MYSQL_HOST': '192.168.16.10'])
2026-04-11 02:39:09.582307 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:39:09.582331 | orchestrator | skipping: [testbed-node-1] => (item=mariadb [full item value elided; identical to the mariadb item printed above, with 'MYSQL_HOST': '192.168.16.11'])
2026-04-11 02:39:14.666913 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:39:14.667036 | orchestrator | skipping: [testbed-node-2] => (item=mariadb [full item value elided; identical to the mariadb item printed above, with 'MYSQL_HOST': '192.168.16.12'])
2026-04-11 02:39:14.667069 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:39:14.667076 | orchestrator |
2026-04-11 02:39:14.667082 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-04-11 02:39:14.667090 | orchestrator | Saturday 11 April 2026 02:39:09 +0000 (0:00:02.386) 0:00:18.651 ********
2026-04-11 02:39:14.667098 | orchestrator | skipping: [testbed-node-0] => (item=mariadb [full item value elided; identical to the mariadb item printed above, with 'MYSQL_HOST': '192.168.16.10'])
2026-04-11 02:39:14.667106 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:39:14.667189 | orchestrator | skipping: [testbed-node-1] => (item=mariadb [full item value elided; identical to the mariadb item printed above, with 'MYSQL_HOST': '192.168.16.11'])
2026-04-11 02:39:14.667206 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:39:14.667212 | orchestrator | skipping: [testbed-node-2] => (item=mariadb [full item value elided; identical to the mariadb item printed above, with 'MYSQL_HOST': '192.168.16.12'])
2026-04-11 02:39:14.667218 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:39:14.667223 | orchestrator |
2026-04-11 02:39:14.667229 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-04-11 02:39:14.667234 | orchestrator | Saturday 11 April 2026 02:39:12 +0000 (0:00:02.648) 0:00:21.300 ********
2026-04-11 02:39:14.667251 | orchestrator | skipping: [testbed-node-0] => (item=mariadb [full item value elided; identical to the mariadb item printed above, with 'MYSQL_HOST': '192.168.16.10'])
2026-04-11 02:39:17.586324 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:39:17.587311 | orchestrator | skipping: [testbed-node-1] => (item=mariadb [full item value elided; identical to the mariadb item printed above, with 'MYSQL_HOST': '192.168.16.11'])
2026-04-11 02:39:17.587353 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:39:17.587383 | orchestrator | skipping: [testbed-node-2] => (item=mariadb [full item value elided; identical to the mariadb item printed above, with 'MYSQL_HOST': '192.168.16.12'])
2026-04-11 02:39:17.587416 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:39:17.587428 | orchestrator |
2026-04-11 02:39:17.587441 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-04-11 02:39:17.587454 | orchestrator | Saturday 11 April 2026 02:39:14 +0000 (0:00:02.444) 0:00:23.744 ********
2026-04-11 02:39:17.587490 | orchestrator | changed: [testbed-node-1] => (item=mariadb [full item value elided; identical to the mariadb item printed above, with 'MYSQL_HOST': '192.168.16.11'])
2026-04-11 02:39:17.587505 | orchestrator | changed: [testbed-node-0] => (item=mariadb [full item value elided; identical to the mariadb item printed above, with 'MYSQL_HOST': '192.168.16.10'])
2026-04-11 02:39:17.587534 | orchestrator | changed: [testbed-node-2] => (item=mariadb [full item value elided; identical to the mariadb item printed above, with 'MYSQL_HOST': '192.168.16.12'])
2026-04-11 02:41:38.686442 | orchestrator |
2026-04-11 02:41:38.686592 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-04-11 02:41:38.686622 | orchestrator | Saturday 11 April 2026 02:39:17 +0000 (0:00:02.914) 0:00:26.659 ********
2026-04-11 02:41:38.686642 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:41:38.686661 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:41:38.686679 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:41:38.686697 | orchestrator |
2026-04-11 02:41:38.686716 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-04-11 02:41:38.686736 | orchestrator | Saturday 11 April 2026 02:39:18 +0000 (0:00:00.831) 0:00:27.490 ********
2026-04-11 02:41:38.686789 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:41:38.686810 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:41:38.686828 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:41:38.686846 | orchestrator |
2026-04-11 02:41:38.686865 | orchestrator | TASK [mariadb : Establish
whether the cluster has already existed] *************
2026-04-11 02:41:38.686884 | orchestrator | Saturday 11 April 2026 02:39:18 +0000 (0:00:00.560) 0:00:28.051 ********
2026-04-11 02:41:38.686901 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:41:38.686919 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:41:38.686938 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:41:38.686958 | orchestrator |
2026-04-11 02:41:38.686971 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-04-11 02:41:38.686984 | orchestrator | Saturday 11 April 2026 02:39:19 +0000 (0:00:00.367) 0:00:28.418 ********
2026-04-11 02:41:38.686997 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-04-11 02:41:38.687011 | orchestrator | ...ignoring
2026-04-11 02:41:38.687025 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-04-11 02:41:38.687037 | orchestrator | ...ignoring
2026-04-11 02:41:38.687050 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-04-11 02:41:38.687062 | orchestrator | ...ignoring
2026-04-11 02:41:38.687102 | orchestrator |
2026-04-11 02:41:38.687116 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-04-11 02:41:38.687128 | orchestrator | Saturday 11 April 2026 02:39:30 +0000 (0:00:10.881) 0:00:39.300 ********
2026-04-11 02:41:38.687141 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:41:38.687154 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:41:38.687167 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:41:38.687179 | orchestrator |
2026-04-11 02:41:38.687192 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-04-11 02:41:38.687204 | orchestrator | Saturday 11 April 2026 02:39:30 +0000 (0:00:00.453) 0:00:39.753 ********
2026-04-11 02:41:38.687217 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:41:38.687230 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:41:38.687242 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:41:38.687255 | orchestrator |
2026-04-11 02:41:38.687268 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-04-11 02:41:38.687280 | orchestrator | Saturday 11 April 2026 02:39:31 +0000 (0:00:00.715) 0:00:40.469 ********
2026-04-11 02:41:38.687294 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:41:38.687306 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:41:38.687316 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:41:38.687327 | orchestrator |
2026-04-11 02:41:38.687354 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-04-11 02:41:38.687366 | orchestrator | Saturday 11 April 2026 02:39:31 +0000 (0:00:00.431) 0:00:40.901 ********
2026-04-11 02:41:38.687378 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:41:38.687388 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:41:38.687399 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:41:38.687410 | orchestrator |
2026-04-11 02:41:38.687421 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-04-11 02:41:38.687432 | orchestrator | Saturday 11 April 2026 02:39:32 +0000 (0:00:00.454) 0:00:41.356 ********
2026-04-11 02:41:38.687443 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:41:38.687455 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:41:38.687473 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:41:38.687491 | orchestrator |
2026-04-11 02:41:38.687510 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-04-11 02:41:38.687528 | orchestrator | Saturday 11 April 2026 02:39:32 +0000 (0:00:00.466) 0:00:41.822 ********
2026-04-11 02:41:38.687545 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:41:38.687563 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:41:38.687582 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:41:38.687598 | orchestrator |
2026-04-11 02:41:38.687615 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-11 02:41:38.687633 | orchestrator | Saturday 11 April 2026 02:39:33 +0000 (0:00:00.944) 0:00:42.767 ********
2026-04-11 02:41:38.687651 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:41:38.687671 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:41:38.687691 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-04-11 02:41:38.687709 | orchestrator |
2026-04-11 02:41:38.687727 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-04-11 02:41:38.687740 | orchestrator | Saturday 11 April 2026 02:39:34 +0000 (0:00:00.444) 0:00:43.211 ********
2026-04-11 02:41:38.687777 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:41:38.687789 | orchestrator |
2026-04-11 02:41:38.687799 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-04-11 02:41:38.687810 | orchestrator | Saturday 11 April 2026 02:39:44 +0000 (0:00:10.601) 0:00:53.813 ********
2026-04-11 02:41:38.687821 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:41:38.687832 | orchestrator |
2026-04-11 02:41:38.687842 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-11 02:41:38.687854 | orchestrator | Saturday 11 April 2026 02:39:44 +0000 (0:00:00.148) 0:00:53.961 ********
2026-04-11 02:41:38.687865 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:41:38.687914 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:41:38.687955 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:41:38.687988 | orchestrator |
2026-04-11 02:41:38.688008 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-04-11 02:41:38.688027 | orchestrator | Saturday 11 April 2026 02:39:46 +0000 (0:00:01.151) 0:00:55.113 ********
2026-04-11 02:41:38.688045 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:41:38.688064 | orchestrator |
2026-04-11 02:41:38.688082 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-04-11 02:41:38.688100 | orchestrator | Saturday 11 April 2026 02:39:54 +0000 (0:00:08.452) 0:01:03.565 ********
2026-04-11 02:41:38.688118 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:41:38.688135 | orchestrator |
2026-04-11 02:41:38.688153 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-04-11 02:41:38.688172 | orchestrator | Saturday 11 April 2026 02:39:56 +0000 (0:00:02.687) 0:01:05.171 ********
2026-04-11 02:41:38.688191 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:41:38.688210 | orchestrator |
2026-04-11 02:41:38.688228 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-04-11 02:41:38.688247 | orchestrator | Saturday 11 April 2026 02:39:58 +0000 (0:00:02.687) 0:01:07.858 ********
2026-04-11 02:41:38.688266 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:41:38.688285 | orchestrator |
2026-04-11 02:41:38.688303 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-04-11 02:41:38.688321 | orchestrator | Saturday 11 April 2026 02:39:58 +0000 (0:00:00.128) 0:01:07.987 ********
2026-04-11 02:41:38.688340 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:41:38.688359 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:41:38.688377 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:41:38.688395 | orchestrator |
2026-04-11 02:41:38.688413 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-04-11 02:41:38.688430 | orchestrator | Saturday 11 April 2026 02:39:59 +0000 (0:00:00.347) 0:01:08.334 ********
2026-04-11 02:41:38.688449 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:41:38.688467 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-11 02:41:38.688484 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:41:38.688500 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:41:38.688517 | orchestrator |
2026-04-11 02:41:38.688535 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-11 02:41:38.688552 | orchestrator | skipping: no hosts matched
2026-04-11 02:41:38.688571 | orchestrator |
2026-04-11 02:41:38.688590 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-11 02:41:38.688607 | orchestrator |
2026-04-11 02:41:38.688625 | orchestrator | TASK [mariadb : Restart MariaDB container]
************************************* 2026-04-11 02:41:38.688643 | orchestrator | Saturday 11 April 2026 02:39:59 +0000 (0:00:00.582) 0:01:08.917 ******** 2026-04-11 02:41:38.688662 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:41:38.688680 | orchestrator | 2026-04-11 02:41:38.688698 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-11 02:41:38.688716 | orchestrator | Saturday 11 April 2026 02:40:19 +0000 (0:00:19.275) 0:01:28.193 ******** 2026-04-11 02:41:38.688734 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:41:38.688799 | orchestrator | 2026-04-11 02:41:38.688820 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-11 02:41:38.688838 | orchestrator | Saturday 11 April 2026 02:40:35 +0000 (0:00:16.609) 0:01:44.802 ******** 2026-04-11 02:41:38.688857 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:41:38.688877 | orchestrator | 2026-04-11 02:41:38.688901 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-11 02:41:38.688920 | orchestrator | 2026-04-11 02:41:38.688954 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-11 02:41:38.688973 | orchestrator | Saturday 11 April 2026 02:40:38 +0000 (0:00:02.660) 0:01:47.463 ******** 2026-04-11 02:41:38.689008 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:41:38.689027 | orchestrator | 2026-04-11 02:41:38.689046 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-11 02:41:38.689064 | orchestrator | Saturday 11 April 2026 02:40:57 +0000 (0:00:19.041) 0:02:06.505 ******** 2026-04-11 02:41:38.689083 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:41:38.689102 | orchestrator | 2026-04-11 02:41:38.689122 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-11 02:41:38.689140 
| orchestrator | Saturday 11 April 2026 02:41:14 +0000 (0:00:16.583) 0:02:23.089 ******** 2026-04-11 02:41:38.689156 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:41:38.689167 | orchestrator | 2026-04-11 02:41:38.689178 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-11 02:41:38.689189 | orchestrator | 2026-04-11 02:41:38.689199 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-11 02:41:38.689210 | orchestrator | Saturday 11 April 2026 02:41:16 +0000 (0:00:02.660) 0:02:25.750 ******** 2026-04-11 02:41:38.689221 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:41:38.689231 | orchestrator | 2026-04-11 02:41:38.689242 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-11 02:41:38.689253 | orchestrator | Saturday 11 April 2026 02:41:29 +0000 (0:00:12.935) 0:02:38.686 ******** 2026-04-11 02:41:38.689269 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:41:38.689288 | orchestrator | 2026-04-11 02:41:38.689306 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-11 02:41:38.689324 | orchestrator | Saturday 11 April 2026 02:41:35 +0000 (0:00:05.570) 0:02:44.256 ******** 2026-04-11 02:41:38.689342 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:41:38.689358 | orchestrator | 2026-04-11 02:41:38.689373 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-11 02:41:38.689392 | orchestrator | 2026-04-11 02:41:38.689411 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-11 02:41:38.689429 | orchestrator | Saturday 11 April 2026 02:41:37 +0000 (0:00:02.735) 0:02:46.992 ******** 2026-04-11 02:41:38.689448 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:41:38.689468 | orchestrator | 
2026-04-11 02:41:38.689487 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-11 02:41:38.689524 | orchestrator | Saturday 11 April 2026 02:41:38 +0000 (0:00:00.762) 0:02:47.754 ******** 2026-04-11 02:41:51.440995 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:41:51.441103 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:41:51.441113 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:41:51.441119 | orchestrator | 2026-04-11 02:41:51.441125 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-11 02:41:51.441131 | orchestrator | Saturday 11 April 2026 02:41:40 +0000 (0:00:02.230) 0:02:49.985 ******** 2026-04-11 02:41:51.441135 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:41:51.441140 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:41:51.441144 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:41:51.441149 | orchestrator | 2026-04-11 02:41:51.441153 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-11 02:41:51.441157 | orchestrator | Saturday 11 April 2026 02:41:43 +0000 (0:00:02.099) 0:02:52.085 ******** 2026-04-11 02:41:51.441162 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:41:51.441166 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:41:51.441171 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:41:51.441175 | orchestrator | 2026-04-11 02:41:51.441179 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-11 02:41:51.441183 | orchestrator | Saturday 11 April 2026 02:41:45 +0000 (0:00:02.330) 0:02:54.415 ******** 2026-04-11 02:41:51.441187 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:41:51.441192 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:41:51.441196 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:41:51.441221 | orchestrator | 
2026-04-11 02:41:51.441226 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-11 02:41:51.441230 | orchestrator | Saturday 11 April 2026 02:41:47 +0000 (0:00:02.119) 0:02:56.535 ******** 2026-04-11 02:41:51.441234 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:41:51.441240 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:41:51.441244 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:41:51.441248 | orchestrator | 2026-04-11 02:41:51.441252 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-11 02:41:51.441256 | orchestrator | Saturday 11 April 2026 02:41:50 +0000 (0:00:03.106) 0:02:59.641 ******** 2026-04-11 02:41:51.441260 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:41:51.441265 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:41:51.441269 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:41:51.441273 | orchestrator | 2026-04-11 02:41:51.441277 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 02:41:51.441283 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-04-11 02:41:51.441289 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-11 02:41:51.441293 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-11 02:41:51.441297 | orchestrator | 2026-04-11 02:41:51.441313 | orchestrator | 2026-04-11 02:41:51.441317 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 02:41:51.441321 | orchestrator | Saturday 11 April 2026 02:41:51 +0000 (0:00:00.460) 0:03:00.102 ******** 2026-04-11 02:41:51.441325 | orchestrator | =============================================================================== 2026-04-11 02:41:51.441340 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.32s 2026-04-11 02:41:51.441345 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.19s 2026-04-11 02:41:51.441349 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.94s 2026-04-11 02:41:51.441353 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.88s 2026-04-11 02:41:51.441357 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.60s 2026-04-11 02:41:51.441361 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.45s 2026-04-11 02:41:51.441366 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.57s 2026-04-11 02:41:51.441370 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.32s 2026-04-11 02:41:51.441374 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.20s 2026-04-11 02:41:51.441378 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.20s 2026-04-11 02:41:51.441382 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.11s 2026-04-11 02:41:51.441386 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.91s 2026-04-11 02:41:51.441390 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.82s 2026-04-11 02:41:51.441395 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.74s 2026-04-11 02:41:51.441399 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.69s 2026-04-11 02:41:51.441403 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.65s 2026-04-11 02:41:51.441408 | 
orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.44s 2026-04-11 02:41:51.441412 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.39s 2026-04-11 02:41:51.441416 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.33s 2026-04-11 02:41:51.441424 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.23s 2026-04-11 02:41:54.151700 | orchestrator | 2026-04-11 02:41:54 | INFO  | Task 20944ffe-44d2-4556-9466-d4d0ff132a6f (rabbitmq) was prepared for execution. 2026-04-11 02:41:54.151839 | orchestrator | 2026-04-11 02:41:54 | INFO  | It takes a moment until task 20944ffe-44d2-4556-9466-d4d0ff132a6f (rabbitmq) has been started and output is visible here. 2026-04-11 02:42:08.332259 | orchestrator | 2026-04-11 02:42:08.332354 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 02:42:08.332367 | orchestrator | 2026-04-11 02:42:08.332374 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 02:42:08.332381 | orchestrator | Saturday 11 April 2026 02:41:58 +0000 (0:00:00.191) 0:00:00.191 ******** 2026-04-11 02:42:08.332388 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:42:08.332397 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:42:08.332401 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:42:08.332405 | orchestrator | 2026-04-11 02:42:08.332409 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 02:42:08.332413 | orchestrator | Saturday 11 April 2026 02:41:59 +0000 (0:00:00.326) 0:00:00.518 ******** 2026-04-11 02:42:08.332418 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-11 02:42:08.332422 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-11 02:42:08.332426 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-11 02:42:08.332430 | orchestrator | 2026-04-11 02:42:08.332434 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-11 02:42:08.332438 | orchestrator | 2026-04-11 02:42:08.332442 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-11 02:42:08.332446 | orchestrator | Saturday 11 April 2026 02:41:59 +0000 (0:00:00.590) 0:00:01.108 ******** 2026-04-11 02:42:08.332451 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:42:08.332456 | orchestrator | 2026-04-11 02:42:08.332459 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-11 02:42:08.332463 | orchestrator | Saturday 11 April 2026 02:42:00 +0000 (0:00:00.566) 0:00:01.675 ******** 2026-04-11 02:42:08.332467 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:42:08.332471 | orchestrator | 2026-04-11 02:42:08.332475 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-11 02:42:08.332479 | orchestrator | Saturday 11 April 2026 02:42:01 +0000 (0:00:01.047) 0:00:02.722 ******** 2026-04-11 02:42:08.332483 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:42:08.332488 | orchestrator | 2026-04-11 02:42:08.332492 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-11 02:42:08.332496 | orchestrator | Saturday 11 April 2026 02:42:01 +0000 (0:00:00.398) 0:00:03.121 ******** 2026-04-11 02:42:08.332499 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:42:08.332503 | orchestrator | 2026-04-11 02:42:08.332507 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-11 02:42:08.332511 | orchestrator | Saturday 11 April 2026 02:42:02 +0000 (0:00:00.405) 0:00:03.526 ******** 
2026-04-11 02:42:08.332514 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:42:08.332518 | orchestrator | 2026-04-11 02:42:08.332522 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-11 02:42:08.332526 | orchestrator | Saturday 11 April 2026 02:42:02 +0000 (0:00:00.397) 0:00:03.924 ******** 2026-04-11 02:42:08.332530 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:42:08.332533 | orchestrator | 2026-04-11 02:42:08.332537 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-11 02:42:08.332552 | orchestrator | Saturday 11 April 2026 02:42:03 +0000 (0:00:00.613) 0:00:04.538 ******** 2026-04-11 02:42:08.332556 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:42:08.332576 | orchestrator | 2026-04-11 02:42:08.332581 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-11 02:42:08.332584 | orchestrator | Saturday 11 April 2026 02:42:04 +0000 (0:00:00.950) 0:00:05.488 ******** 2026-04-11 02:42:08.332588 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:42:08.332592 | orchestrator | 2026-04-11 02:42:08.332596 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-11 02:42:08.332600 | orchestrator | Saturday 11 April 2026 02:42:04 +0000 (0:00:00.853) 0:00:06.342 ******** 2026-04-11 02:42:08.332603 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:42:08.332607 | orchestrator | 2026-04-11 02:42:08.332611 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-11 02:42:08.332615 | orchestrator | Saturday 11 April 2026 02:42:05 +0000 (0:00:00.393) 0:00:06.735 ******** 2026-04-11 02:42:08.332618 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:42:08.332622 | orchestrator | 2026-04-11 
02:42:08.332626 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-11 02:42:08.332630 | orchestrator | Saturday 11 April 2026 02:42:05 +0000 (0:00:00.401) 0:00:07.137 ******** 2026-04-11 02:42:08.332650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 02:42:08.332657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 02:42:08.332666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 02:42:08.332674 | orchestrator | 2026-04-11 02:42:08.332678 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-11 02:42:08.332682 | orchestrator | Saturday 11 April 2026 02:42:06 +0000 (0:00:00.872) 0:00:08.010 ******** 2026-04-11 02:42:08.332686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 02:42:08.332696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 02:42:28.075441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 02:42:28.075534 | orchestrator | 2026-04-11 02:42:28.075543 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-11 02:42:28.075568 | orchestrator | Saturday 11 April 2026 02:42:08 +0000 (0:00:01.679) 0:00:09.689 ******** 2026-04-11 02:42:28.075573 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-11 02:42:28.075578 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-11 02:42:28.075583 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-11 02:42:28.075587 | orchestrator | 2026-04-11 02:42:28.075592 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
***********************************
2026-04-11 02:42:28.075596 | orchestrator | Saturday 11 April 2026 02:42:09 +0000 (0:00:01.575) 0:00:11.265 ********
2026-04-11 02:42:28.075612 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-11 02:42:28.075617 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-11 02:42:28.075621 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-11 02:42:28.075626 | orchestrator |
2026-04-11 02:42:28.075630 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-04-11 02:42:28.075634 | orchestrator | Saturday 11 April 2026 02:42:11 +0000 (0:00:01.414) 0:00:13.056 ********
2026-04-11 02:42:28.075639 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-11 02:42:28.075643 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-11 02:42:28.075648 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-11 02:42:28.075652 | orchestrator |
2026-04-11 02:42:28.075656 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-04-11 02:42:28.075661 | orchestrator | Saturday 11 April 2026 02:42:13 +0000 (0:00:01.717) 0:00:14.471 ********
2026-04-11 02:42:28.075665 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-11 02:42:28.075669 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-11 02:42:28.075673 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-11 02:42:28.075678 | orchestrator |
2026-04-11 02:42:28.075682 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-04-11 02:42:28.075686 | orchestrator | Saturday 11 April 2026 02:42:14 +0000 (0:00:01.717) 0:00:16.188 ********
2026-04-11 02:42:28.075691 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-11 02:42:28.075695 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-11 02:42:28.075700 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-11 02:42:28.075704 | orchestrator |
2026-04-11 02:42:28.075708 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-04-11 02:42:28.075713 | orchestrator | Saturday 11 April 2026 02:42:16 +0000 (0:00:01.479) 0:00:17.657 ********
2026-04-11 02:42:28.075717 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-11 02:42:28.075722 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-11 02:42:28.075726 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-11 02:42:28.075731 | orchestrator |
2026-04-11 02:42:28.075735 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-11 02:42:28.075739 | orchestrator | Saturday 11 April 2026 02:42:17 +0000 (0:00:01.479) 0:00:19.137 ********
2026-04-11 02:42:28.075744 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:42:28.075780 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:42:28.075798 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:42:28.075807 | orchestrator |
2026-04-11 02:42:28.075812 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-04-11 02:42:28.075816 | orchestrator |
Saturday 11 April 2026 02:42:18 +0000 (0:00:00.425) 0:00:19.563 ********
2026-04-11 02:42:28.075821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 02:42:28.075830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 02:42:28.075836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 02:42:28.075841 | orchestrator |
2026-04-11 02:42:28.075845 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-04-11 02:42:28.075850 | orchestrator | Saturday 11 April 2026 02:42:19 +0000 (0:00:01.255) 0:00:20.819 ********
2026-04-11 02:42:28.075854 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:42:28.075859 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:42:28.075864 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:42:28.075872 | orchestrator |
2026-04-11 02:42:28.075880 | orchestrator | TASK
[rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-04-11 02:42:28.075892 | orchestrator | Saturday 11 April 2026 02:42:20 +0000 (0:00:00.937) 0:00:21.757 ********
2026-04-11 02:42:28.075899 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:42:28.075907 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:42:28.075914 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:42:28.075921 | orchestrator |
2026-04-11 02:42:28.075928 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-04-11 02:42:28.075940 | orchestrator | Saturday 11 April 2026 02:42:28 +0000 (0:00:07.676) 0:00:29.433 ********
2026-04-11 02:43:58.134726 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:43:58.134811 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:43:58.134821 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:43:58.134828 | orchestrator |
2026-04-11 02:43:58.134836 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-11 02:43:58.134843 | orchestrator |
2026-04-11 02:43:58.134850 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-11 02:43:58.134856 | orchestrator | Saturday 11 April 2026 02:42:28 +0000 (0:00:00.565) 0:00:29.999 ********
2026-04-11 02:43:58.134863 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:43:58.134869 | orchestrator |
2026-04-11 02:43:58.134875 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-11 02:43:58.134881 | orchestrator | Saturday 11 April 2026 02:42:29 +0000 (0:00:00.596) 0:00:30.596 ********
2026-04-11 02:43:58.134887 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:43:58.134893 | orchestrator |
2026-04-11 02:43:58.134899 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-11 02:43:58.134905 | orchestrator | Saturday 11 April 2026 02:42:29 +0000 (0:00:00.241) 0:00:30.838 ********
2026-04-11 02:43:58.134911 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:43:58.134917 | orchestrator |
2026-04-11 02:43:58.134923 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-11 02:43:58.134929 | orchestrator | Saturday 11 April 2026 02:42:31 +0000 (0:00:01.813) 0:00:32.651 ********
2026-04-11 02:43:58.134934 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:43:58.134976 | orchestrator |
2026-04-11 02:43:58.134982 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-11 02:43:58.134988 | orchestrator |
2026-04-11 02:43:58.134994 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-11 02:43:58.135000 | orchestrator | Saturday 11 April 2026 02:43:24 +0000 (0:00:53.241) 0:01:25.893 ********
2026-04-11 02:43:58.135006 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:43:58.135012 | orchestrator |
2026-04-11 02:43:58.135018 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-11 02:43:58.135023 | orchestrator | Saturday 11 April 2026 02:43:25 +0000 (0:00:00.600) 0:01:26.493 ********
2026-04-11 02:43:58.135029 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:43:58.135035 | orchestrator |
2026-04-11 02:43:58.135041 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-11 02:43:58.135046 | orchestrator | Saturday 11 April 2026 02:43:25 +0000 (0:00:00.244) 0:01:26.737 ********
2026-04-11 02:43:58.135052 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:43:58.135058 | orchestrator |
2026-04-11 02:43:58.135064 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-11 02:43:58.135083 | orchestrator | Saturday 11 April 2026 02:43:26 +0000 (0:00:01.633) 0:01:28.371 ********
2026-04-11 02:43:58.135089 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:43:58.135094 | orchestrator |
2026-04-11 02:43:58.135100 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-11 02:43:58.135106 | orchestrator |
2026-04-11 02:43:58.135112 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-11 02:43:58.135118 | orchestrator | Saturday 11 April 2026 02:43:40 +0000 (0:00:13.497) 0:01:41.869 ********
2026-04-11 02:43:58.135123 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:43:58.135129 | orchestrator |
2026-04-11 02:43:58.135150 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-11 02:43:58.135157 | orchestrator | Saturday 11 April 2026 02:43:41 +0000 (0:00:00.807) 0:01:42.676 ********
2026-04-11 02:43:58.135162 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:43:58.135168 | orchestrator |
2026-04-11 02:43:58.135174 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-11 02:43:58.135180 | orchestrator | Saturday 11 April 2026 02:43:41 +0000 (0:00:00.251) 0:01:42.928 ********
2026-04-11 02:43:58.135186 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:43:58.135192 | orchestrator |
2026-04-11 02:43:58.135198 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-11 02:43:58.135203 | orchestrator | Saturday 11 April 2026 02:43:43 +0000 (0:00:01.613) 0:01:44.542 ********
2026-04-11 02:43:58.135209 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:43:58.135215 | orchestrator |
2026-04-11 02:43:58.135221 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-11 02:43:58.135226 | orchestrator |
2026-04-11 02:43:58.135232 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-11 02:43:58.135238 | orchestrator | Saturday 11 April 2026 02:43:55 +0000 (0:00:11.937) 0:01:56.479 ********
2026-04-11 02:43:58.135243 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:43:58.135249 | orchestrator |
2026-04-11 02:43:58.135255 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-11 02:43:58.135261 | orchestrator | Saturday 11 April 2026 02:43:55 +0000 (0:00:00.520) 0:01:57.000 ********
2026-04-11 02:43:58.135267 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-11 02:43:58.135272 | orchestrator | enable_outward_rabbitmq_True
2026-04-11 02:43:58.135278 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-11 02:43:58.135284 | orchestrator | outward_rabbitmq_restart
2026-04-11 02:43:58.135291 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:43:58.135298 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:43:58.135304 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:43:58.135311 | orchestrator |
2026-04-11 02:43:58.135317 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-04-11 02:43:58.135324 | orchestrator | skipping: no hosts matched
2026-04-11 02:43:58.135331 | orchestrator |
2026-04-11 02:43:58.135337 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-04-11 02:43:58.135344 | orchestrator | skipping: no hosts matched
2026-04-11 02:43:58.135350 | orchestrator |
2026-04-11 02:43:58.135357 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-04-11 02:43:58.135363 | orchestrator | skipping: no hosts matched
2026-04-11 02:43:58.135370 | orchestrator |
2026-04-11 02:43:58.135377 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:43:58.135395 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-11 02:43:58.135403 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:43:58.135410 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:43:58.135417 | orchestrator |
2026-04-11 02:43:58.135424 | orchestrator |
2026-04-11 02:43:58.135431 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:43:58.135437 | orchestrator | Saturday 11 April 2026 02:43:57 +0000 (0:00:02.085) 0:01:59.085 ********
2026-04-11 02:43:58.135444 | orchestrator | ===============================================================================
2026-04-11 02:43:58.135451 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 78.68s
2026-04-11 02:43:58.135457 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.68s
2026-04-11 02:43:58.135469 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.06s
2026-04-11 02:43:58.135476 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.09s
2026-04-11 02:43:58.135483 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.00s
2026-04-11 02:43:58.135490 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.79s
2026-04-11 02:43:58.135497 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.72s
2026-04-11 02:43:58.135503 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.68s
2026-04-11 02:43:58.135510 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.58s
2026-04-11 02:43:58.135517 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.48s
2026-04-11 02:43:58.135523 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.47s
2026-04-11 02:43:58.135530 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.41s
2026-04-11 02:43:58.135536 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.26s
2026-04-11 02:43:58.135543 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.05s
2026-04-11 02:43:58.135553 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.95s
2026-04-11 02:43:58.135560 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.94s
2026-04-11 02:43:58.135567 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.87s
2026-04-11 02:43:58.135574 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.85s
2026-04-11 02:43:58.135581 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.74s
2026-04-11 02:43:58.135588 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.61s
2026-04-11 02:44:00.790235 | orchestrator | 2026-04-11 02:44:00 | INFO  | Task e0bdd35a-335c-4ce0-9b32-176f1bdcea19 (openvswitch) was prepared for execution.
2026-04-11 02:44:00.790369 | orchestrator | 2026-04-11 02:44:00 | INFO  | It takes a moment until task e0bdd35a-335c-4ce0-9b32-176f1bdcea19 (openvswitch) has been started and output is visible here.
2026-04-11 02:44:14.269202 | orchestrator |
2026-04-11 02:44:14.269319 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 02:44:14.269336 | orchestrator |
2026-04-11 02:44:14.269349 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 02:44:14.269361 | orchestrator | Saturday 11 April 2026 02:44:05 +0000 (0:00:00.313) 0:00:00.313 ********
2026-04-11 02:44:14.269385 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:44:14.269397 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:44:14.269408 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:44:14.269420 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:44:14.269431 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:44:14.269442 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:44:14.269453 | orchestrator |
2026-04-11 02:44:14.269464 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 02:44:14.269475 | orchestrator | Saturday 11 April 2026 02:44:06 +0000 (0:00:00.746) 0:00:01.059 ********
2026-04-11 02:44:14.269486 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 02:44:14.269498 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 02:44:14.269509 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 02:44:14.269520 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 02:44:14.269531 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 02:44:14.269542 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 02:44:14.269579 | orchestrator |
2026-04-11 02:44:14.269591 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-11 02:44:14.269602 | orchestrator |
2026-04-11 02:44:14.269614 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-11 02:44:14.269625 | orchestrator | Saturday 11 April 2026 02:44:06 +0000 (0:00:00.645) 0:00:01.705 ********
2026-04-11 02:44:14.269637 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 02:44:14.269649 | orchestrator |
2026-04-11 02:44:14.269660 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-11 02:44:14.269671 | orchestrator | Saturday 11 April 2026 02:44:08 +0000 (0:00:01.264) 0:00:02.970 ********
2026-04-11 02:44:14.269682 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-11 02:44:14.269694 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-11 02:44:14.269705 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-11 02:44:14.269716 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-11 02:44:14.269727 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-11 02:44:14.269740 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-11 02:44:14.269753 | orchestrator |
2026-04-11 02:44:14.269766 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-11 02:44:14.269778 | orchestrator | Saturday 11 April 2026 02:44:09 +0000 (0:00:01.173) 0:00:04.143 ********
2026-04-11 02:44:14.269791 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-11 02:44:14.269804 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-11 02:44:14.269816 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-11 02:44:14.269829 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-11 02:44:14.269842 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-11 02:44:14.269854 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-11 02:44:14.269866 | orchestrator |
2026-04-11 02:44:14.269878 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-11 02:44:14.269891 | orchestrator | Saturday 11 April 2026 02:44:10 +0000 (0:00:01.504) 0:00:05.647 ********
2026-04-11 02:44:14.269903 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-11 02:44:14.269916 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:44:14.269929 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-11 02:44:14.269942 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:44:14.269954 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-11 02:44:14.269967 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:44:14.270010 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-11 02:44:14.270086 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:44:14.270098 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-11 02:44:14.270109 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:44:14.270120 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-11 02:44:14.270132 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:44:14.270143 | orchestrator |
2026-04-11 02:44:14.270154 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-11 02:44:14.270165 | orchestrator | Saturday 11 April 2026 02:44:12 +0000 (0:00:01.285) 0:00:06.933 ********
2026-04-11 02:44:14.270176 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:44:14.270187 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:44:14.270198 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:44:14.270209 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:44:14.270220 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:44:14.270231 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:44:14.270242 | orchestrator |
2026-04-11 02:44:14.270253 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-04-11 02:44:14.270273 | orchestrator | Saturday 11 April 2026 02:44:12 +0000 (0:00:00.796) 0:00:07.730 ********
2026-04-11 02:44:14.270310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-11 02:44:14.270329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-11 02:44:14.270341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-11 02:44:14.270392 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-11 02:44:14.270411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-11 02:44:14.270431 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-11 02:44:16.785418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-11 02:44:16.785524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-11 02:44:16.785542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-11 02:44:16.785554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-11 02:44:16.785584 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-11 02:44:16.785643 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-11 02:44:16.785653 | orchestrator |
2026-04-11 02:44:16.785661 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-04-11 02:44:16.785669 | orchestrator | Saturday 11 April 2026 02:44:14 +0000 (0:00:01.456) 0:00:09.186 ********
2026-04-11 02:44:16.785676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-11 02:44:16.785685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-11 02:44:16.785693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-11 02:44:16.785700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-11 02:44:16.785716 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-11 02:44:16.785730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-11 02:44:19.679181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-11 02:44:19.679294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-11 02:44:19.679310 | orchestrator | changed: [testbed-node-3] =>
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 02:44:19.679338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 02:44:19.679369 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 02:44:19.679399 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 02:44:19.679410 | orchestrator | 2026-04-11 02:44:19.679422 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-11 02:44:19.679434 | orchestrator | Saturday 11 April 2026 02:44:16 +0000 (0:00:02.516) 0:00:11.703 ******** 2026-04-11 02:44:19.679444 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:44:19.679455 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:44:19.679464 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:44:19.679474 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:44:19.679483 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:44:19.679493 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:44:19.679503 | orchestrator | 2026-04-11 02:44:19.679513 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-04-11 02:44:19.679523 | orchestrator | Saturday 11 April 2026 02:44:17 +0000 (0:00:01.072) 0:00:12.776 ******** 2026-04-11 02:44:19.679533 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 02:44:19.679545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 02:44:19.679568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 02:44:19.679579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 02:44:19.679597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 02:44:45.948900 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 02:44:45.948999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 02:44:45.949012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 
02:44:45.949081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 02:44:45.949093 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 02:44:45.949117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 02:44:45.949126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 02:44:45.949135 | orchestrator | 2026-04-11 02:44:45.949145 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 02:44:45.949154 | orchestrator | Saturday 11 April 2026 02:44:19 +0000 (0:00:01.820) 0:00:14.596 ******** 2026-04-11 02:44:45.949162 | orchestrator | 2026-04-11 02:44:45.949172 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 02:44:45.949185 | orchestrator | Saturday 11 April 2026 02:44:20 +0000 (0:00:00.404) 0:00:15.000 ******** 2026-04-11 02:44:45.949208 | orchestrator | 2026-04-11 02:44:45.949220 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 02:44:45.949232 | orchestrator | Saturday 11 April 2026 02:44:20 +0000 (0:00:00.176) 0:00:15.176 ******** 2026-04-11 02:44:45.949244 | orchestrator | 2026-04-11 02:44:45.949256 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-04-11 02:44:45.949269 | orchestrator | Saturday 11 April 2026 02:44:20 +0000 (0:00:00.157) 0:00:15.334 ******** 2026-04-11 02:44:45.949282 | orchestrator | 2026-04-11 02:44:45.949295 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 02:44:45.949307 | orchestrator | Saturday 11 April 2026 02:44:20 +0000 (0:00:00.156) 0:00:15.490 ******** 2026-04-11 02:44:45.949319 | orchestrator | 2026-04-11 02:44:45.949332 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 02:44:45.949344 | orchestrator | Saturday 11 April 2026 02:44:20 +0000 (0:00:00.145) 0:00:15.636 ******** 2026-04-11 02:44:45.949357 | orchestrator | 2026-04-11 02:44:45.949370 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-11 02:44:45.949383 | orchestrator | Saturday 11 April 2026 02:44:20 +0000 (0:00:00.148) 0:00:15.784 ******** 2026-04-11 02:44:45.949396 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:44:45.949410 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:44:45.949422 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:44:45.949436 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:44:45.949449 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:44:45.949464 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:44:45.949478 | orchestrator | 2026-04-11 02:44:45.949491 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-11 02:44:45.949505 | orchestrator | Saturday 11 April 2026 02:44:30 +0000 (0:00:09.144) 0:00:24.929 ******** 2026-04-11 02:44:45.949528 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:44:45.949544 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:44:45.949559 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:44:45.949572 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:44:45.949585 | orchestrator | ok: 
[testbed-node-4] 2026-04-11 02:44:45.949599 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:44:45.949613 | orchestrator | 2026-04-11 02:44:45.949627 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-11 02:44:45.949640 | orchestrator | Saturday 11 April 2026 02:44:31 +0000 (0:00:01.162) 0:00:26.091 ******** 2026-04-11 02:44:45.949650 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:44:45.949658 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:44:45.949667 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:44:45.949676 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:44:45.949686 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:44:45.949695 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:44:45.949703 | orchestrator | 2026-04-11 02:44:45.949712 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-11 02:44:45.949721 | orchestrator | Saturday 11 April 2026 02:44:39 +0000 (0:00:08.127) 0:00:34.219 ******** 2026-04-11 02:44:45.949730 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-11 02:44:45.949740 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-11 02:44:45.949749 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-11 02:44:45.949758 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-11 02:44:45.949767 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-11 02:44:45.949776 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-11 
02:44:45.949785 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-11 02:44:45.949809 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-11 02:44:59.321692 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-11 02:44:59.321804 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-11 02:44:59.321820 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-11 02:44:59.321831 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-11 02:44:59.321842 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-11 02:44:59.321852 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-11 02:44:59.321862 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-11 02:44:59.321870 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-11 02:44:59.321877 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-11 02:44:59.321883 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-11 02:44:59.321889 | orchestrator | 2026-04-11 02:44:59.321897 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-04-11 02:44:59.321904 | orchestrator | Saturday 11 April 2026 02:44:45 +0000 (0:00:06.546) 0:00:40.766 ******** 2026-04-11 02:44:59.321912 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-11 02:44:59.321919 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:44:59.321926 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-11 02:44:59.321932 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:44:59.321938 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-11 02:44:59.321944 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:44:59.321950 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-04-11 02:44:59.321956 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-04-11 02:44:59.321961 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-04-11 02:44:59.321967 | orchestrator | 2026-04-11 02:44:59.321974 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-11 02:44:59.321980 | orchestrator | Saturday 11 April 2026 02:44:48 +0000 (0:00:02.544) 0:00:43.311 ******** 2026-04-11 02:44:59.321985 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-04-11 02:44:59.321991 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:44:59.321997 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-04-11 02:44:59.322003 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:44:59.322009 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-04-11 02:44:59.322054 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:44:59.322061 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-04-11 02:44:59.322067 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-04-11 02:44:59.322156 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-04-11 02:44:59.322165 | orchestrator 
| 2026-04-11 02:44:59.322171 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-11 02:44:59.322177 | orchestrator | Saturday 11 April 2026 02:44:51 +0000 (0:00:03.230) 0:00:46.541 ******** 2026-04-11 02:44:59.322183 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:44:59.322188 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:44:59.322217 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:44:59.322226 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:44:59.322236 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:44:59.322245 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:44:59.322254 | orchestrator | 2026-04-11 02:44:59.322264 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 02:44:59.322275 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-11 02:44:59.322287 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-11 02:44:59.322296 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-11 02:44:59.322306 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-11 02:44:59.322318 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-11 02:44:59.322328 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-11 02:44:59.322338 | orchestrator | 2026-04-11 02:44:59.322348 | orchestrator | 2026-04-11 02:44:59.322357 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 02:44:59.322364 | orchestrator | Saturday 11 April 2026 02:44:58 +0000 (0:00:07.141) 0:00:53.682 ******** 2026-04-11 02:44:59.322386 | 
orchestrator | =============================================================================== 2026-04-11 02:44:59.322393 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.27s 2026-04-11 02:44:59.322400 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.14s 2026-04-11 02:44:59.322407 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.55s 2026-04-11 02:44:59.322413 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.23s 2026-04-11 02:44:59.322420 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.54s 2026-04-11 02:44:59.322427 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.52s 2026-04-11 02:44:59.322433 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.82s 2026-04-11 02:44:59.322440 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.50s 2026-04-11 02:44:59.322446 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.46s 2026-04-11 02:44:59.322454 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.29s 2026-04-11 02:44:59.322460 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.26s 2026-04-11 02:44:59.322467 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.19s 2026-04-11 02:44:59.322473 | orchestrator | module-load : Load modules ---------------------------------------------- 1.17s 2026-04-11 02:44:59.322480 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.16s 2026-04-11 02:44:59.322486 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.07s 2026-04-11 02:44:59.322493 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 0.80s 2026-04-11 02:44:59.322500 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.75s 2026-04-11 02:44:59.322506 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s 2026-04-11 02:45:02.017263 | orchestrator | 2026-04-11 02:45:02 | INFO  | Task 2ab5bb85-2dfb-4748-a2ce-6e12a76b4fba (ovn) was prepared for execution. 2026-04-11 02:45:02.017363 | orchestrator | 2026-04-11 02:45:02 | INFO  | It takes a moment until task 2ab5bb85-2dfb-4748-a2ce-6e12a76b4fba (ovn) has been started and output is visible here. 2026-04-11 02:45:13.554110 | orchestrator | 2026-04-11 02:45:13.554257 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 02:45:13.554271 | orchestrator | 2026-04-11 02:45:13.554280 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 02:45:13.554289 | orchestrator | Saturday 11 April 2026 02:45:06 +0000 (0:00:00.183) 0:00:00.183 ******** 2026-04-11 02:45:13.554298 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:45:13.554308 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:45:13.554317 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:45:13.554327 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:45:13.554335 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:45:13.554344 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:45:13.554353 | orchestrator | 2026-04-11 02:45:13.554364 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 02:45:13.554388 | orchestrator | Saturday 11 April 2026 02:45:07 +0000 (0:00:00.788) 0:00:00.971 ******** 2026-04-11 02:45:13.554398 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-11 02:45:13.554409 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-11 
02:45:13.554419 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-11 02:45:13.554428 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-11 02:45:13.554438 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-11 02:45:13.554447 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-11 02:45:13.554455 | orchestrator | 2026-04-11 02:45:13.554466 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-11 02:45:13.554475 | orchestrator | 2026-04-11 02:45:13.554485 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-11 02:45:13.554495 | orchestrator | Saturday 11 April 2026 02:45:08 +0000 (0:00:00.883) 0:00:01.855 ******** 2026-04-11 02:45:13.554505 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:45:13.554515 | orchestrator | 2026-04-11 02:45:13.554525 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-11 02:45:13.554536 | orchestrator | Saturday 11 April 2026 02:45:09 +0000 (0:00:01.217) 0:00:03.073 ******** 2026-04-11 02:45:13.554548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554562 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554630 | orchestrator | 2026-04-11 02:45:13.554636 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-11 02:45:13.554642 | orchestrator | Saturday 11 April 2026 02:45:10 +0000 (0:00:01.251) 0:00:04.325 ******** 2026-04-11 02:45:13.554653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554666 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554672 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554696 | orchestrator | 2026-04-11 02:45:13.554704 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-11 02:45:13.554713 | orchestrator | Saturday 11 April 2026 02:45:12 +0000 (0:00:01.510) 0:00:05.835 ******** 2026-04-11 02:45:13.554722 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554731 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:13.554748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.051694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.051823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.051845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.051865 | orchestrator | 2026-04-11 02:45:38.051882 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-11 02:45:38.051901 | orchestrator | Saturday 11 April 2026 02:45:13 +0000 (0:00:01.175) 0:00:07.010 ******** 2026-04-11 02:45:38.051918 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.051963 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.052018 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.052036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.052052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.052093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.052131 | orchestrator | 2026-04-11 02:45:38.052161 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-11 02:45:38.052202 | orchestrator | Saturday 11 April 2026 02:45:15 +0000 (0:00:01.597) 0:00:08.608 ******** 
2026-04-11 02:45:38.052231 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.052250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.052267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.052286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.052317 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.052335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 02:45:38.052354 | orchestrator | 2026-04-11 02:45:38.052372 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-11 02:45:38.052390 | orchestrator | Saturday 11 April 2026 02:45:16 +0000 (0:00:01.483) 0:00:10.092 ******** 2026-04-11 02:45:38.052408 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:45:38.052427 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:45:38.052445 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:45:38.052463 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:45:38.052481 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:45:38.052496 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:45:38.052514 | orchestrator | 2026-04-11 02:45:38.052530 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-11 02:45:38.052545 | orchestrator | Saturday 11 April 2026 02:45:19 +0000 (0:00:02.512) 0:00:12.605 ******** 2026-04-11 02:45:38.052561 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-04-11 02:45:38.052578 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-11 02:45:38.052592 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-11 02:45:38.052606 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-11 02:45:38.052619 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-11 02:45:38.052632 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-11 02:45:38.052657 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-11 02:46:17.636492 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-11 02:46:17.636608 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-11 02:46:17.636644 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-11 02:46:17.636665 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-11 02:46:17.636684 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-11 02:46:17.636704 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-11 02:46:17.636726 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-11 02:46:17.636781 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-11 02:46:17.636802 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-11 02:46:17.636823 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-11 02:46:17.636842 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-11 02:46:17.636861 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-11 02:46:17.636874 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-11 02:46:17.636885 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-11 02:46:17.636896 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-11 02:46:17.636907 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-11 02:46:17.636918 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-11 02:46:17.636929 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-11 02:46:17.636939 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-11 02:46:17.636950 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-11 02:46:17.636960 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-11 02:46:17.636971 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-04-11 02:46:17.636982 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-11 02:46:17.636993 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-11 02:46:17.637005 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-11 02:46:17.637018 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-11 02:46:17.637031 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-11 02:46:17.637044 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-11 02:46:17.637056 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-11 02:46:17.637069 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-11 02:46:17.637082 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-11 02:46:17.637095 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-11 02:46:17.637107 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-11 02:46:17.637119 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-11 02:46:17.637132 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-11 02:46:17.637144 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 
'present'}) 2026-04-11 02:46:17.637189 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-11 02:46:17.637203 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-11 02:46:17.637223 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-11 02:46:17.637236 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-11 02:46:17.637249 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-11 02:46:17.637289 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-11 02:46:17.637302 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-11 02:46:17.637314 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-11 02:46:17.637327 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-11 02:46:17.637340 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-11 02:46:17.637353 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-11 02:46:17.637365 | orchestrator | 2026-04-11 02:46:17.637377 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-04-11 02:46:17.637388 | orchestrator | Saturday 11 April 2026 02:45:37 +0000 (0:00:18.218) 0:00:30.824 ******** 2026-04-11 02:46:17.637399 | orchestrator | 2026-04-11 02:46:17.637410 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-11 02:46:17.637421 | orchestrator | Saturday 11 April 2026 02:45:37 +0000 (0:00:00.264) 0:00:31.088 ******** 2026-04-11 02:46:17.637432 | orchestrator | 2026-04-11 02:46:17.637443 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-11 02:46:17.637454 | orchestrator | Saturday 11 April 2026 02:45:37 +0000 (0:00:00.069) 0:00:31.158 ******** 2026-04-11 02:46:17.637465 | orchestrator | 2026-04-11 02:46:17.637476 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-11 02:46:17.637487 | orchestrator | Saturday 11 April 2026 02:45:37 +0000 (0:00:00.074) 0:00:31.233 ******** 2026-04-11 02:46:17.637498 | orchestrator | 2026-04-11 02:46:17.637509 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-11 02:46:17.637519 | orchestrator | Saturday 11 April 2026 02:45:37 +0000 (0:00:00.077) 0:00:31.310 ******** 2026-04-11 02:46:17.637530 | orchestrator | 2026-04-11 02:46:17.637541 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-11 02:46:17.637552 | orchestrator | Saturday 11 April 2026 02:45:37 +0000 (0:00:00.093) 0:00:31.404 ******** 2026-04-11 02:46:17.637563 | orchestrator | 2026-04-11 02:46:17.637574 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-11 02:46:17.637585 | orchestrator | Saturday 11 April 2026 02:45:38 +0000 (0:00:00.099) 0:00:31.504 ******** 2026-04-11 02:46:17.637596 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:46:17.637609 | orchestrator | ok: 
[testbed-node-3] 2026-04-11 02:46:17.637620 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:46:17.637631 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:46:17.637642 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:46:17.637652 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:46:17.637663 | orchestrator | 2026-04-11 02:46:17.637675 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-11 02:46:17.637686 | orchestrator | Saturday 11 April 2026 02:45:39 +0000 (0:00:01.593) 0:00:33.097 ******** 2026-04-11 02:46:17.637704 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:46:17.637716 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:46:17.637727 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:46:17.637737 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:46:17.637748 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:46:17.637759 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:46:17.637770 | orchestrator | 2026-04-11 02:46:17.637781 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-11 02:46:17.637792 | orchestrator | 2026-04-11 02:46:17.637803 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-11 02:46:17.637814 | orchestrator | Saturday 11 April 2026 02:46:15 +0000 (0:00:35.670) 0:01:08.768 ******** 2026-04-11 02:46:17.637825 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:46:17.637836 | orchestrator | 2026-04-11 02:46:17.637847 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-11 02:46:17.637858 | orchestrator | Saturday 11 April 2026 02:46:16 +0000 (0:00:00.739) 0:01:09.507 ******** 2026-04-11 02:46:17.637869 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-11 02:46:17.637880 | orchestrator | 2026-04-11 02:46:17.637891 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-11 02:46:17.637902 | orchestrator | Saturday 11 April 2026 02:46:16 +0000 (0:00:00.562) 0:01:10.070 ******** 2026-04-11 02:46:17.637913 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:46:17.637924 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:46:17.637935 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:46:17.637946 | orchestrator | 2026-04-11 02:46:17.637957 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-11 02:46:17.637975 | orchestrator | Saturday 11 April 2026 02:46:17 +0000 (0:00:01.017) 0:01:11.087 ******** 2026-04-11 02:46:29.596132 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:46:29.596243 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:46:29.596261 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:46:29.596274 | orchestrator | 2026-04-11 02:46:29.596340 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-11 02:46:29.596369 | orchestrator | Saturday 11 April 2026 02:46:18 +0000 (0:00:00.397) 0:01:11.485 ******** 2026-04-11 02:46:29.596381 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:46:29.596393 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:46:29.596403 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:46:29.596414 | orchestrator | 2026-04-11 02:46:29.596425 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-11 02:46:29.596437 | orchestrator | Saturday 11 April 2026 02:46:18 +0000 (0:00:00.373) 0:01:11.859 ******** 2026-04-11 02:46:29.596447 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:46:29.596458 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:46:29.596469 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:46:29.596480 | orchestrator | 
2026-04-11 02:46:29.596491 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-11 02:46:29.596516 | orchestrator | Saturday 11 April 2026 02:46:18 +0000 (0:00:00.325) 0:01:12.184 ******** 2026-04-11 02:46:29.596527 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:46:29.596538 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:46:29.596548 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:46:29.596559 | orchestrator | 2026-04-11 02:46:29.596570 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-11 02:46:29.596581 | orchestrator | Saturday 11 April 2026 02:46:19 +0000 (0:00:00.547) 0:01:12.732 ******** 2026-04-11 02:46:29.596592 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:46:29.596604 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:46:29.596615 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:46:29.596626 | orchestrator | 2026-04-11 02:46:29.596637 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-11 02:46:29.596670 | orchestrator | Saturday 11 April 2026 02:46:19 +0000 (0:00:00.338) 0:01:13.071 ******** 2026-04-11 02:46:29.596684 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:46:29.596697 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:46:29.596709 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:46:29.596722 | orchestrator | 2026-04-11 02:46:29.596734 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-11 02:46:29.596747 | orchestrator | Saturday 11 April 2026 02:46:19 +0000 (0:00:00.328) 0:01:13.399 ******** 2026-04-11 02:46:29.596760 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:46:29.596773 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:46:29.596785 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:46:29.596798 | orchestrator | 2026-04-11 
02:46:29.596811 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-04-11 02:46:29.596823 | orchestrator | Saturday 11 April 2026 02:46:20 +0000 (0:00:00.330) 0:01:13.730 ********
2026-04-11 02:46:29.596836 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.596848 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.596861 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.596873 | orchestrator |
2026-04-11 02:46:29.596887 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-04-11 02:46:29.596907 | orchestrator | Saturday 11 April 2026 02:46:20 +0000 (0:00:00.326) 0:01:14.056 ********
2026-04-11 02:46:29.596926 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.596945 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.596964 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.596982 | orchestrator |
2026-04-11 02:46:29.597000 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-04-11 02:46:29.597019 | orchestrator | Saturday 11 April 2026 02:46:21 +0000 (0:00:00.521) 0:01:14.578 ********
2026-04-11 02:46:29.597035 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.597051 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.597069 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.597087 | orchestrator |
2026-04-11 02:46:29.597106 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-04-11 02:46:29.597125 | orchestrator | Saturday 11 April 2026 02:46:21 +0000 (0:00:00.331) 0:01:14.910 ********
2026-04-11 02:46:29.597143 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.597160 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.597180 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.597198 | orchestrator |
2026-04-11 02:46:29.597214 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-04-11 02:46:29.597225 | orchestrator | Saturday 11 April 2026 02:46:21 +0000 (0:00:00.317) 0:01:15.227 ********
2026-04-11 02:46:29.597236 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.597247 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.597258 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.597268 | orchestrator |
2026-04-11 02:46:29.597311 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-04-11 02:46:29.597324 | orchestrator | Saturday 11 April 2026 02:46:22 +0000 (0:00:00.317) 0:01:15.545 ********
2026-04-11 02:46:29.597335 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.597345 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.597356 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.597366 | orchestrator |
2026-04-11 02:46:29.597377 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-04-11 02:46:29.597388 | orchestrator | Saturday 11 April 2026 02:46:22 +0000 (0:00:00.583) 0:01:16.128 ********
2026-04-11 02:46:29.597399 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.597409 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.597420 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.597432 | orchestrator |
2026-04-11 02:46:29.597443 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-04-11 02:46:29.597465 | orchestrator | Saturday 11 April 2026 02:46:22 +0000 (0:00:00.329) 0:01:16.457 ********
2026-04-11 02:46:29.597476 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.597487 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.597497 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.597508 | orchestrator |
2026-04-11 02:46:29.597519 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-04-11 02:46:29.597529 | orchestrator | Saturday 11 April 2026 02:46:23 +0000 (0:00:00.324) 0:01:16.782 ********
2026-04-11 02:46:29.597561 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.597572 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.597583 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.597594 | orchestrator |
2026-04-11 02:46:29.597605 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-11 02:46:29.597623 | orchestrator | Saturday 11 April 2026 02:46:23 +0000 (0:00:00.328) 0:01:17.110 ********
2026-04-11 02:46:29.597635 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:46:29.597646 | orchestrator |
2026-04-11 02:46:29.597657 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-04-11 02:46:29.597668 | orchestrator | Saturday 11 April 2026 02:46:24 +0000 (0:00:00.816) 0:01:17.927 ********
2026-04-11 02:46:29.597678 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:46:29.597689 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:46:29.597700 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:46:29.597711 | orchestrator |
2026-04-11 02:46:29.597722 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-04-11 02:46:29.597732 | orchestrator | Saturday 11 April 2026 02:46:24 +0000 (0:00:00.464) 0:01:18.391 ********
2026-04-11 02:46:29.597743 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:46:29.597754 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:46:29.597764 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:46:29.597775 | orchestrator |
2026-04-11 02:46:29.597786 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-04-11 02:46:29.597797 | orchestrator | Saturday 11 April 2026 02:46:25 +0000 (0:00:00.463) 0:01:18.854 ********
2026-04-11 02:46:29.597808 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.597819 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.597830 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.597841 | orchestrator |
2026-04-11 02:46:29.597852 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-04-11 02:46:29.597862 | orchestrator | Saturday 11 April 2026 02:46:25 +0000 (0:00:00.372) 0:01:19.227 ********
2026-04-11 02:46:29.597873 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.597884 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.597895 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.597906 | orchestrator |
2026-04-11 02:46:29.597917 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-04-11 02:46:29.597927 | orchestrator | Saturday 11 April 2026 02:46:26 +0000 (0:00:00.642) 0:01:19.870 ********
2026-04-11 02:46:29.597938 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.597949 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.597960 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.597970 | orchestrator |
2026-04-11 02:46:29.597981 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-04-11 02:46:29.597992 | orchestrator | Saturday 11 April 2026 02:46:26 +0000 (0:00:00.381) 0:01:20.251 ********
2026-04-11 02:46:29.598002 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.598013 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.598089 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.598100 | orchestrator |
2026-04-11 02:46:29.598116 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-04-11 02:46:29.598135 | orchestrator | Saturday 11 April 2026 02:46:27 +0000 (0:00:00.361) 0:01:20.613 ********
2026-04-11 02:46:29.598169 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.598188 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.598205 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.598222 | orchestrator |
2026-04-11 02:46:29.598239 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-04-11 02:46:29.598256 | orchestrator | Saturday 11 April 2026 02:46:27 +0000 (0:00:00.361) 0:01:20.974 ********
2026-04-11 02:46:29.598272 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:29.598364 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:29.598381 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:29.598397 | orchestrator |
2026-04-11 02:46:29.598413 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-11 02:46:29.598430 | orchestrator | Saturday 11 April 2026 02:46:28 +0000 (0:00:00.611) 0:01:21.585 ********
2026-04-11 02:46:29.598451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:29.598472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:29.598490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:29.598540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010711 | orchestrator |
2026-04-11 02:46:36.010721 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-11 02:46:36.010730 | orchestrator | Saturday 11 April 2026 02:46:29 +0000 (0:00:01.465) 0:01:23.051 ********
2026-04-11 02:46:36.010739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010854 | orchestrator |
2026-04-11 02:46:36.010862 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-04-11 02:46:36.010869 | orchestrator | Saturday 11 April 2026 02:46:33 +0000 (0:00:03.908) 0:01:26.959 ********
2026-04-11 02:46:36.010878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:36.010925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:56.603793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:56.603955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:56.603981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:56.603998 | orchestrator |
2026-04-11 02:46:56.604044 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-11 02:46:56.604065 | orchestrator | Saturday 11 April 2026 02:46:35 +0000 (0:00:02.034) 0:01:28.994 ********
2026-04-11 02:46:56.604081 | orchestrator |
2026-04-11 02:46:56.604098 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-11 02:46:56.604114 | orchestrator | Saturday 11 April 2026 02:46:35 +0000 (0:00:00.069) 0:01:29.063 ********
2026-04-11 02:46:56.604130 | orchestrator |
2026-04-11 02:46:56.604144 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-11 02:46:56.604153 | orchestrator | Saturday 11 April 2026 02:46:35 +0000 (0:00:00.319) 0:01:29.383 ********
2026-04-11 02:46:56.604163 | orchestrator |
2026-04-11 02:46:56.604172 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-11 02:46:56.604182 | orchestrator | Saturday 11 April 2026 02:46:35 +0000 (0:00:00.076) 0:01:29.459 ********
2026-04-11 02:46:56.604192 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:46:56.604203 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:46:56.604213 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:46:56.604222 | orchestrator |
2026-04-11 02:46:56.604232 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-11 02:46:56.604241 | orchestrator | Saturday 11 April 2026 02:46:43 +0000 (0:00:07.859) 0:01:37.318 ********
2026-04-11 02:46:56.604251 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:46:56.604260 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:46:56.604271 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:46:56.604286 | orchestrator |
2026-04-11 02:46:56.604311 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-11 02:46:56.604327 | orchestrator | Saturday 11 April 2026 02:46:46 +0000 (0:00:02.881) 0:01:40.199 ********
2026-04-11 02:46:56.604371 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:46:56.604387 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:46:56.604406 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:46:56.604427 | orchestrator |
2026-04-11 02:46:56.604440 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-11 02:46:56.604455 | orchestrator | Saturday 11 April 2026 02:46:49 +0000 (0:00:02.726) 0:01:42.926 ********
2026-04-11 02:46:56.604470 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:46:56.604485 | orchestrator |
2026-04-11 02:46:56.604499 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-11 02:46:56.604513 | orchestrator | Saturday 11 April 2026 02:46:49 +0000 (0:00:00.141) 0:01:43.067 ********
2026-04-11 02:46:56.604528 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:46:56.604545 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:46:56.604560 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:46:56.604575 | orchestrator |
2026-04-11 02:46:56.604591 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-11 02:46:56.604607 | orchestrator | Saturday 11 April 2026 02:46:50 +0000 (0:00:01.017) 0:01:44.085 ********
2026-04-11 02:46:56.604622 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:56.604654 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:56.604670 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:46:56.604685 | orchestrator |
2026-04-11 02:46:56.604701 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-11 02:46:56.604717 | orchestrator | Saturday 11 April 2026 02:46:51 +0000 (0:00:00.632) 0:01:44.718 ********
2026-04-11 02:46:56.604758 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:46:56.604794 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:46:56.604831 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:46:56.604866 | orchestrator |
2026-04-11 02:46:56.604883 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-11 02:46:56.604918 | orchestrator | Saturday 11 April 2026 02:46:52 +0000 (0:00:00.829) 0:01:45.547 ********
2026-04-11 02:46:56.604934 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:46:56.604948 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:46:56.604962 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:46:56.604977 | orchestrator |
2026-04-11 02:46:56.604992 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-11 02:46:56.605007 | orchestrator | Saturday 11 April 2026 02:46:52 +0000 (0:00:00.620) 0:01:46.168 ********
2026-04-11 02:46:56.605023 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:46:56.605039 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:46:56.605084 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:46:56.605103 | orchestrator |
2026-04-11 02:46:56.605119 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-11 02:46:56.605134 | orchestrator | Saturday 11 April 2026 02:46:53 +0000 (0:00:01.252) 0:01:47.420 ********
2026-04-11 02:46:56.605151 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:46:56.605167 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:46:56.605483 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:46:56.605511 | orchestrator |
2026-04-11 02:46:56.605531 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-04-11 02:46:56.605548 | orchestrator | Saturday 11 April 2026 02:46:54 +0000 (0:00:00.792) 0:01:48.213 ********
2026-04-11 02:46:56.605567 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:46:56.605585 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:46:56.605602 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:46:56.605618 | orchestrator |
2026-04-11 02:46:56.605636 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-11 02:46:56.605654 | orchestrator | Saturday 11 April 2026 02:46:55 +0000 (0:00:00.364) 0:01:48.577 ********
2026-04-11 02:46:56.605675 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:56.605706 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:56.605730 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:56.605754 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:56.605797 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:56.605808 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:56.605830 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:56.605863 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:46:56.605900 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855070 | orchestrator |
2026-04-11 02:47:03.855163 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-11 02:47:03.855176 | orchestrator | Saturday 11 April 2026 02:46:56 +0000 (0:00:01.473) 0:01:50.051 ********
2026-04-11 02:47:03.855188 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855206 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855221 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855236 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855303 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855456 | orchestrator |
2026-04-11 02:47:03.855470 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-04-11 02:47:03.855483 | orchestrator | Saturday 11 April 2026 02:47:00 +0000 (0:00:04.021) 0:01:54.072 ********
2026-04-11 02:47:03.855514 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855523 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855531 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855540 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855574 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 02:47:03.855603 | orchestrator |
2026-04-11 02:47:03.855611 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-11 02:47:03.855621 | orchestrator | Saturday 11 April 2026 02:47:03 +0000 (0:00:03.007) 0:01:57.079 ********
2026-04-11 02:47:03.855630 | orchestrator |
2026-04-11 02:47:03.855639 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-11 02:47:03.855664 | orchestrator | Saturday 11 April 2026 02:47:03 +0000 (0:00:00.071) 0:01:57.151 ********
2026-04-11 02:47:03.855674 | orchestrator |
2026-04-11 02:47:03.855687 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-11 02:47:03.855705 | orchestrator | Saturday 11 April 2026 02:47:03 +0000 (0:00:00.079) 0:01:57.230 ********
2026-04-11 02:47:03.855724 | orchestrator |
2026-04-11 02:47:03.855746 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-11 02:47:28.337921 | orchestrator | Saturday 11 April 2026 02:47:03 +0000 (0:00:00.069) 0:01:57.300 ********
2026-04-11 02:47:28.337996 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:47:28.338003 | orchestrator | changed:
[testbed-node-2] 2026-04-11 02:47:28.338007 | orchestrator | 2026-04-11 02:47:28.338043 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-11 02:47:28.338048 | orchestrator | Saturday 11 April 2026 02:47:10 +0000 (0:00:06.262) 0:02:03.562 ******** 2026-04-11 02:47:28.338052 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:47:28.338056 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:47:28.338061 | orchestrator | 2026-04-11 02:47:28.338082 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-11 02:47:28.338087 | orchestrator | Saturday 11 April 2026 02:47:16 +0000 (0:00:06.188) 0:02:09.751 ******** 2026-04-11 02:47:28.338091 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:47:28.338095 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:47:28.338099 | orchestrator | 2026-04-11 02:47:28.338103 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-11 02:47:28.338108 | orchestrator | Saturday 11 April 2026 02:47:22 +0000 (0:00:06.242) 0:02:15.994 ******** 2026-04-11 02:47:28.338112 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:47:28.338116 | orchestrator | 2026-04-11 02:47:28.338120 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-11 02:47:28.338124 | orchestrator | Saturday 11 April 2026 02:47:22 +0000 (0:00:00.135) 0:02:16.129 ******** 2026-04-11 02:47:28.338128 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:47:28.338134 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:47:28.338138 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:47:28.338142 | orchestrator | 2026-04-11 02:47:28.338146 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-11 02:47:28.338150 | orchestrator | Saturday 11 April 2026 02:47:23 +0000 (0:00:01.135) 0:02:17.265 ******** 
2026-04-11 02:47:28.338154 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:47:28.338158 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:47:28.338162 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:47:28.338166 | orchestrator | 2026-04-11 02:47:28.338170 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-11 02:47:28.338174 | orchestrator | Saturday 11 April 2026 02:47:24 +0000 (0:00:00.618) 0:02:17.883 ******** 2026-04-11 02:47:28.338179 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:47:28.338183 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:47:28.338187 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:47:28.338191 | orchestrator | 2026-04-11 02:47:28.338195 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-11 02:47:28.338199 | orchestrator | Saturday 11 April 2026 02:47:25 +0000 (0:00:00.778) 0:02:18.662 ******** 2026-04-11 02:47:28.338204 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:47:28.338209 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:47:28.338215 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:47:28.338222 | orchestrator | 2026-04-11 02:47:28.338228 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-11 02:47:28.338237 | orchestrator | Saturday 11 April 2026 02:47:25 +0000 (0:00:00.651) 0:02:19.313 ******** 2026-04-11 02:47:28.338244 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:47:28.338252 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:47:28.338258 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:47:28.338264 | orchestrator | 2026-04-11 02:47:28.338271 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-11 02:47:28.338277 | orchestrator | Saturday 11 April 2026 02:47:26 +0000 (0:00:01.071) 0:02:20.385 ******** 2026-04-11 02:47:28.338282 | orchestrator 
| ok: [testbed-node-0] 2026-04-11 02:47:28.338288 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:47:28.338293 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:47:28.338299 | orchestrator | 2026-04-11 02:47:28.338305 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 02:47:28.338312 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-11 02:47:28.338319 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-11 02:47:28.338326 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-11 02:47:28.338332 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 02:47:28.338345 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 02:47:28.338350 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 02:47:28.338356 | orchestrator | 2026-04-11 02:47:28.338361 | orchestrator | 2026-04-11 02:47:28.338379 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 02:47:28.338386 | orchestrator | Saturday 11 April 2026 02:47:27 +0000 (0:00:00.918) 0:02:21.303 ******** 2026-04-11 02:47:28.338462 | orchestrator | =============================================================================== 2026-04-11 02:47:28.338473 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 35.67s 2026-04-11 02:47:28.338478 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.22s 2026-04-11 02:47:28.338484 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.12s 2026-04-11 02:47:28.338491 | orchestrator | ovn-db 
: Restart ovn-sb-db container ------------------------------------ 9.07s 2026-04-11 02:47:28.338497 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.97s 2026-04-11 02:47:28.338520 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.02s 2026-04-11 02:47:28.338527 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.91s 2026-04-11 02:47:28.338533 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.01s 2026-04-11 02:47:28.338539 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.51s 2026-04-11 02:47:28.338546 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.03s 2026-04-11 02:47:28.338552 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.60s 2026-04-11 02:47:28.338559 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.59s 2026-04-11 02:47:28.338565 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.51s 2026-04-11 02:47:28.338571 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.48s 2026-04-11 02:47:28.338577 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.47s 2026-04-11 02:47:28.338584 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.47s 2026-04-11 02:47:28.338592 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.25s 2026-04-11 02:47:28.338598 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.25s 2026-04-11 02:47:28.338605 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.22s 2026-04-11 02:47:28.338612 | orchestrator | ovn-controller : 
Ensuring systemd override directory exists ------------- 1.18s 2026-04-11 02:47:28.704607 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-11 02:47:28.704680 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-04-11 02:47:31.063634 | orchestrator | 2026-04-11 02:47:31 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-11 02:47:41.285559 | orchestrator | 2026-04-11 02:47:41 | INFO  | Task 50ee7d03-0228-4b7b-b723-084cfcd8cb79 (wipe-partitions) was prepared for execution. 2026-04-11 02:47:41.285692 | orchestrator | 2026-04-11 02:47:41 | INFO  | It takes a moment until task 50ee7d03-0228-4b7b-b723-084cfcd8cb79 (wipe-partitions) has been started and output is visible here. 2026-04-11 02:47:54.897058 | orchestrator | 2026-04-11 02:47:54.897148 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-11 02:47:54.897160 | orchestrator | 2026-04-11 02:47:54.897167 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-11 02:47:54.897175 | orchestrator | Saturday 11 April 2026 02:47:46 +0000 (0:00:00.163) 0:00:00.163 ******** 2026-04-11 02:47:54.897205 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:47:54.897213 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:47:54.897220 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:47:54.897227 | orchestrator | 2026-04-11 02:47:54.897234 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-11 02:47:54.897241 | orchestrator | Saturday 11 April 2026 02:47:46 +0000 (0:00:00.636) 0:00:00.800 ******** 2026-04-11 02:47:54.897248 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:47:54.897255 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:47:54.897262 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:47:54.897270 | orchestrator | 2026-04-11 02:47:54.897277 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-11 02:47:54.897285 | orchestrator | Saturday 11 April 2026 02:47:47 +0000 (0:00:00.427) 0:00:01.228 ******** 2026-04-11 02:47:54.897291 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:47:54.897296 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:47:54.897301 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:47:54.897305 | orchestrator | 2026-04-11 02:47:54.897310 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-11 02:47:54.897314 | orchestrator | Saturday 11 April 2026 02:47:47 +0000 (0:00:00.623) 0:00:01.852 ******** 2026-04-11 02:47:54.897319 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:47:54.897323 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:47:54.897328 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:47:54.897333 | orchestrator | 2026-04-11 02:47:54.897337 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-11 02:47:54.897341 | orchestrator | Saturday 11 April 2026 02:47:48 +0000 (0:00:00.329) 0:00:02.181 ******** 2026-04-11 02:47:54.897346 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-11 02:47:54.897351 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-11 02:47:54.897355 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-11 02:47:54.897360 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-11 02:47:54.897364 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-11 02:47:54.897368 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-11 02:47:54.897383 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-11 02:47:54.897387 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-11 02:47:54.897392 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-04-11 02:47:54.897396 | orchestrator | 2026-04-11 02:47:54.897400 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-11 02:47:54.897405 | orchestrator | Saturday 11 April 2026 02:47:49 +0000 (0:00:01.240) 0:00:03.422 ******** 2026-04-11 02:47:54.897409 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-11 02:47:54.897414 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-11 02:47:54.897418 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-11 02:47:54.897422 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-11 02:47:54.897426 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-11 02:47:54.897431 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-04-11 02:47:54.897435 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-11 02:47:54.897496 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-11 02:47:54.897502 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-11 02:47:54.897507 | orchestrator | 2026-04-11 02:47:54.897511 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-11 02:47:54.897515 | orchestrator | Saturday 11 April 2026 02:47:50 +0000 (0:00:01.645) 0:00:05.067 ******** 2026-04-11 02:47:54.897519 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-11 02:47:54.897524 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-11 02:47:54.897528 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-11 02:47:54.897533 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-11 02:47:54.897543 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-11 02:47:54.897547 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-11 02:47:54.897552 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-11 02:47:54.897556 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-11 02:47:54.897560 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-11 02:47:54.897565 | orchestrator | 2026-04-11 02:47:54.897569 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-11 02:47:54.897574 | orchestrator | Saturday 11 April 2026 02:47:53 +0000 (0:00:02.195) 0:00:07.263 ******** 2026-04-11 02:47:54.897578 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:47:54.897582 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:47:54.897587 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:47:54.897591 | orchestrator | 2026-04-11 02:47:54.897595 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-04-11 02:47:54.897600 | orchestrator | Saturday 11 April 2026 02:47:53 +0000 (0:00:00.680) 0:00:07.943 ******** 2026-04-11 02:47:54.897604 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:47:54.897608 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:47:54.897613 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:47:54.897617 | orchestrator | 2026-04-11 02:47:54.897621 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 02:47:54.897627 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 02:47:54.897633 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 02:47:54.897650 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 02:47:54.897655 | orchestrator | 2026-04-11 02:47:54.897659 | orchestrator | 2026-04-11 02:47:54.897664 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 02:47:54.897668 | orchestrator | Saturday 11 April 2026 02:47:54 +0000 (0:00:00.659) 
0:00:08.603 ******** 2026-04-11 02:47:54.897672 | orchestrator | =============================================================================== 2026-04-11 02:47:54.897677 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.20s 2026-04-11 02:47:54.897681 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.65s 2026-04-11 02:47:54.897685 | orchestrator | Check device availability ----------------------------------------------- 1.24s 2026-04-11 02:47:54.897689 | orchestrator | Reload udev rules ------------------------------------------------------- 0.68s 2026-04-11 02:47:54.897694 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s 2026-04-11 02:47:54.897698 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.64s 2026-04-11 02:47:54.897702 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.62s 2026-04-11 02:47:54.897708 | orchestrator | Remove all rook related logical devices --------------------------------- 0.43s 2026-04-11 02:47:54.897715 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.33s 2026-04-11 02:48:07.680808 | orchestrator | 2026-04-11 02:48:07 | INFO  | Task 0c1c238d-6512-468b-a7c7-7d8383a25284 (facts) was prepared for execution. 2026-04-11 02:48:07.680947 | orchestrator | 2026-04-11 02:48:07 | INFO  | It takes a moment until task 0c1c238d-6512-468b-a7c7-7d8383a25284 (facts) has been started and output is visible here. 
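The wipe-partitions play above reduces to a handful of standard commands per device (wipefs, zeroing the first 32M, then a udev reload/trigger). A minimal sketch of that sequence, assuming the same /dev/sdb../dev/sdd device list as the log; the plan is only printed here, since the real play runs these destructively:

```shell
# Sketch of the wipe-partitions steps from the play above. The device list
# mirrors the log output; this function only prints the commands it would
# run, because wipefs/dd against real disks is destructive.
wipe_plan() {
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        echo "wipefs --all $dev"                      # drop filesystem/LVM/RAID signatures
        echo "dd if=/dev/zero of=$dev bs=1M count=32" # zero the first 32M (partition tables, labels)
    done
    echo "udevadm control --reload-rules"             # TASK [Reload udev rules]
    echo "udevadm trigger"                            # TASK [Request device events from the kernel]
}
wipe_plan
```

The trailing udevadm pair matches the last two tasks in the recap: after the signatures are gone, udev must re-examine the devices so later ceph-ansible runs see them as clean.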
2026-04-11 02:48:21.845758 | orchestrator | 2026-04-11 02:48:21.845911 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-11 02:48:21.845934 | orchestrator | 2026-04-11 02:48:21.845949 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-11 02:48:21.845999 | orchestrator | Saturday 11 April 2026 02:48:12 +0000 (0:00:00.336) 0:00:00.336 ******** 2026-04-11 02:48:21.846083 | orchestrator | ok: [testbed-manager] 2026-04-11 02:48:21.846107 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:48:21.846121 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:48:21.846174 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:48:21.846188 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:48:21.846202 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:48:21.846216 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:48:21.846231 | orchestrator | 2026-04-11 02:48:21.846245 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-11 02:48:21.846260 | orchestrator | Saturday 11 April 2026 02:48:13 +0000 (0:00:01.289) 0:00:01.625 ******** 2026-04-11 02:48:21.846274 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:48:21.846290 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:48:21.846305 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:48:21.846319 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:48:21.846333 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:48:21.846347 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:48:21.846360 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:48:21.846375 | orchestrator | 2026-04-11 02:48:21.846389 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-11 02:48:21.846404 | orchestrator | 2026-04-11 02:48:21.846420 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-11 02:48:21.846434 | orchestrator | Saturday 11 April 2026 02:48:15 +0000 (0:00:01.429) 0:00:03.054 ******** 2026-04-11 02:48:21.846449 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:48:21.846465 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:48:21.846479 | orchestrator | ok: [testbed-manager] 2026-04-11 02:48:21.846695 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:48:21.846717 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:48:21.846731 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:48:21.846747 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:48:21.846762 | orchestrator | 2026-04-11 02:48:21.846777 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-11 02:48:21.846793 | orchestrator | 2026-04-11 02:48:21.846807 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-11 02:48:21.846821 | orchestrator | Saturday 11 April 2026 02:48:20 +0000 (0:00:05.349) 0:00:08.404 ******** 2026-04-11 02:48:21.846836 | orchestrator | skipping: [testbed-manager] 2026-04-11 02:48:21.846851 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:48:21.846866 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:48:21.846881 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:48:21.846894 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:48:21.846909 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:48:21.846923 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:48:21.846938 | orchestrator | 2026-04-11 02:48:21.846953 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 02:48:21.846967 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 02:48:21.847035 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-11 02:48:21.847052 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 02:48:21.847066 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 02:48:21.847079 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 02:48:21.847092 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 02:48:21.847122 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 02:48:21.847137 | orchestrator | 2026-04-11 02:48:21.847150 | orchestrator | 2026-04-11 02:48:21.847164 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 02:48:21.847178 | orchestrator | Saturday 11 April 2026 02:48:21 +0000 (0:00:00.637) 0:00:09.042 ******** 2026-04-11 02:48:21.847192 | orchestrator | =============================================================================== 2026-04-11 02:48:21.847208 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.35s 2026-04-11 02:48:21.847222 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.43s 2026-04-11 02:48:21.847238 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.29s 2026-04-11 02:48:21.847252 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.64s 2026-04-11 02:48:24.667489 | orchestrator | 2026-04-11 02:48:24 | INFO  | Task 5d83f6c6-bee1-420b-9f21-bde294fae54f (ceph-configure-lvm-volumes) was prepared for execution. 
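The ceph-configure-lvm-volumes task prepared above spends most of its time resolving /dev/disk/by-id links (the repeated "Add known links to the list of available block devices" tasks) so that OSD devices can be addressed by stable identifiers rather than by sdX names. A rough sketch of that enumeration under the standard by-id directory layout; the output format is illustrative, not the play's actual data structure:

```shell
# Sketch: map /dev/disk/by-id symlinks to the block devices they resolve to,
# similar to what the "Add known links" tasks below are doing. The directory
# argument defaults to the standard udev by-id path.
list_device_links() {
    dir=${1:-/dev/disk/by-id}
    for link in "$dir"/*; do
        [ -e "$link" ] || continue                  # skip if the glob matched nothing
        # readlink -f resolves the symlink chain to the real device node
        printf '%s -> %s\n' "$(basename "$link")" "$(readlink -f "$link")"
    done
}
```

On a QEMU guest like these testbed nodes, this yields pairs such as `scsi-0QEMU_QEMU_HARDDISK_... -> /dev/sdb`, which is exactly the shape of the items in the task output that follows.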
2026-04-11 02:48:24.667621 | orchestrator | 2026-04-11 02:48:24 | INFO  | It takes a moment until task 5d83f6c6-bee1-420b-9f21-bde294fae54f (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-04-11 02:48:38.200141 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-11 02:48:38.200255 | orchestrator | 2.16.14 2026-04-11 02:48:38.200272 | orchestrator | 2026-04-11 02:48:38.200284 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-11 02:48:38.200295 | orchestrator | 2026-04-11 02:48:38.200306 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-11 02:48:38.200317 | orchestrator | Saturday 11 April 2026 02:48:29 +0000 (0:00:00.437) 0:00:00.437 ******** 2026-04-11 02:48:38.200327 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-11 02:48:38.200337 | orchestrator | 2026-04-11 02:48:38.200362 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-11 02:48:38.200372 | orchestrator | Saturday 11 April 2026 02:48:30 +0000 (0:00:00.271) 0:00:00.709 ******** 2026-04-11 02:48:38.200383 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:48:38.200393 | orchestrator | 2026-04-11 02:48:38.200402 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:48:38.200412 | orchestrator | Saturday 11 April 2026 02:48:30 +0000 (0:00:00.265) 0:00:00.975 ******** 2026-04-11 02:48:38.200422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-11 02:48:38.200432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-11 02:48:38.200441 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-11 02:48:38.200451 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-11 02:48:38.200461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-11 02:48:38.200470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-11 02:48:38.200480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-11 02:48:38.200490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-11 02:48:38.200499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-11 02:48:38.200509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-11 02:48:38.200608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-11 02:48:38.200620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-11 02:48:38.200654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-11 02:48:38.200665 | orchestrator |
2026-04-11 02:48:38.200677 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:38.200688 | orchestrator | Saturday 11 April 2026 02:48:30 +0000 (0:00:00.539) 0:00:01.514 ********
2026-04-11 02:48:38.200699 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.200711 | orchestrator |
2026-04-11 02:48:38.200722 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:38.200733 | orchestrator | Saturday 11 April 2026 02:48:31 +0000 (0:00:00.223) 0:00:01.738 ********
2026-04-11 02:48:38.200744 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.200755 | orchestrator |
2026-04-11 02:48:38.200766 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:38.200777 | orchestrator | Saturday 11 April 2026 02:48:31 +0000 (0:00:00.222) 0:00:01.960 ********
2026-04-11 02:48:38.200788 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.200799 | orchestrator |
2026-04-11 02:48:38.200810 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:38.200822 | orchestrator | Saturday 11 April 2026 02:48:31 +0000 (0:00:00.215) 0:00:02.176 ********
2026-04-11 02:48:38.200833 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.200844 | orchestrator |
2026-04-11 02:48:38.200856 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:38.200867 | orchestrator | Saturday 11 April 2026 02:48:31 +0000 (0:00:00.206) 0:00:02.383 ********
2026-04-11 02:48:38.200879 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.200890 | orchestrator |
2026-04-11 02:48:38.200901 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:38.200912 | orchestrator | Saturday 11 April 2026 02:48:32 +0000 (0:00:00.262) 0:00:02.645 ********
2026-04-11 02:48:38.200923 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.200934 | orchestrator |
2026-04-11 02:48:38.200945 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:38.200956 | orchestrator | Saturday 11 April 2026 02:48:32 +0000 (0:00:00.221) 0:00:02.867 ********
2026-04-11 02:48:38.200967 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.200978 | orchestrator |
2026-04-11 02:48:38.200989 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:38.201001 | orchestrator | Saturday 11 April 2026 02:48:32 +0000 (0:00:00.234) 0:00:03.101 ********
2026-04-11 02:48:38.201012 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.201023 | orchestrator |
2026-04-11 02:48:38.201033 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:38.201044 | orchestrator | Saturday 11 April 2026 02:48:32 +0000 (0:00:00.237) 0:00:03.338 ********
2026-04-11 02:48:38.201054 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a)
2026-04-11 02:48:38.201065 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a)
2026-04-11 02:48:38.201074 | orchestrator |
2026-04-11 02:48:38.201084 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:38.201110 | orchestrator | Saturday 11 April 2026 02:48:33 +0000 (0:00:00.461) 0:00:03.800 ********
2026-04-11 02:48:38.201121 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c)
2026-04-11 02:48:38.201131 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c)
2026-04-11 02:48:38.201140 | orchestrator |
2026-04-11 02:48:38.201150 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:38.201159 | orchestrator | Saturday 11 April 2026 02:48:33 +0000 (0:00:00.693) 0:00:04.493 ********
2026-04-11 02:48:38.201175 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898)
2026-04-11 02:48:38.201194 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898)
2026-04-11 02:48:38.201204 | orchestrator |
2026-04-11 02:48:38.201214 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:38.201223 | orchestrator | Saturday 11 April 2026 02:48:34 +0000 (0:00:00.805) 0:00:05.298 ********
2026-04-11 02:48:38.201233 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7)
2026-04-11 02:48:38.201243 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7)
2026-04-11 02:48:38.201252 | orchestrator |
2026-04-11 02:48:38.201262 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:38.201271 | orchestrator | Saturday 11 April 2026 02:48:35 +0000 (0:00:01.015) 0:00:06.314 ********
2026-04-11 02:48:38.201281 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-11 02:48:38.201290 | orchestrator |
2026-04-11 02:48:38.201300 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:38.201310 | orchestrator | Saturday 11 April 2026 02:48:36 +0000 (0:00:00.384) 0:00:06.699 ********
2026-04-11 02:48:38.201319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-11 02:48:38.201329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-11 02:48:38.201338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-11 02:48:38.201348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-11 02:48:38.201357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-11 02:48:38.201366 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-11 02:48:38.201376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-11 02:48:38.201385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-11 02:48:38.201395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-11 02:48:38.201404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-11 02:48:38.201414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-11 02:48:38.201423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-11 02:48:38.201433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-11 02:48:38.201442 | orchestrator |
2026-04-11 02:48:38.201452 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:38.201461 | orchestrator | Saturday 11 April 2026 02:48:36 +0000 (0:00:00.456) 0:00:07.155 ********
2026-04-11 02:48:38.201471 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.201481 | orchestrator |
2026-04-11 02:48:38.201490 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:38.201500 | orchestrator | Saturday 11 April 2026 02:48:36 +0000 (0:00:00.226) 0:00:07.382 ********
2026-04-11 02:48:38.201510 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.201537 | orchestrator |
2026-04-11 02:48:38.201547 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:38.201557 | orchestrator | Saturday 11 April 2026 02:48:37 +0000 (0:00:00.232) 0:00:07.615 ********
2026-04-11 02:48:38.201566 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.201576 | orchestrator |
2026-04-11 02:48:38.201585 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:38.201595 | orchestrator | Saturday 11 April 2026 02:48:37 +0000 (0:00:00.239) 0:00:07.854 ********
2026-04-11 02:48:38.201612 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.201622 | orchestrator |
2026-04-11 02:48:38.201631 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:38.201641 | orchestrator | Saturday 11 April 2026 02:48:37 +0000 (0:00:00.204) 0:00:08.058 ********
2026-04-11 02:48:38.201651 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.201661 | orchestrator |
2026-04-11 02:48:38.201670 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:38.201680 | orchestrator | Saturday 11 April 2026 02:48:37 +0000 (0:00:00.233) 0:00:08.292 ********
2026-04-11 02:48:38.201689 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.201699 | orchestrator |
2026-04-11 02:48:38.201708 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:38.201718 | orchestrator | Saturday 11 April 2026 02:48:37 +0000 (0:00:00.228) 0:00:08.521 ********
2026-04-11 02:48:38.201728 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:38.201737 | orchestrator |
2026-04-11 02:48:38.201752 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:46.627734 | orchestrator | Saturday 11 April 2026 02:48:38 +0000 (0:00:00.217) 0:00:08.738 ********
2026-04-11 02:48:46.627856 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.627880 | orchestrator |
2026-04-11 02:48:46.627896 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:46.627911 | orchestrator | Saturday 11 April 2026 02:48:38 +0000 (0:00:00.215) 0:00:08.953 ********
2026-04-11 02:48:46.627925 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-11 02:48:46.627934 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-11 02:48:46.627959 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-11 02:48:46.627967 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-11 02:48:46.627975 | orchestrator |
2026-04-11 02:48:46.627983 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:46.627991 | orchestrator | Saturday 11 April 2026 02:48:39 +0000 (0:00:01.207) 0:00:10.161 ********
2026-04-11 02:48:46.627999 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628007 | orchestrator |
2026-04-11 02:48:46.628015 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:46.628023 | orchestrator | Saturday 11 April 2026 02:48:39 +0000 (0:00:00.224) 0:00:10.386 ********
2026-04-11 02:48:46.628031 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628039 | orchestrator |
2026-04-11 02:48:46.628046 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:46.628054 | orchestrator | Saturday 11 April 2026 02:48:40 +0000 (0:00:00.227) 0:00:10.613 ********
2026-04-11 02:48:46.628062 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628070 | orchestrator |
2026-04-11 02:48:46.628078 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:46.628086 | orchestrator | Saturday 11 April 2026 02:48:40 +0000 (0:00:00.224) 0:00:10.837 ********
2026-04-11 02:48:46.628093 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628102 | orchestrator |
2026-04-11 02:48:46.628109 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-11 02:48:46.628117 | orchestrator | Saturday 11 April 2026 02:48:40 +0000 (0:00:00.241) 0:00:11.079 ********
2026-04-11 02:48:46.628125 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-04-11 02:48:46.628133 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-04-11 02:48:46.628141 | orchestrator |
2026-04-11 02:48:46.628148 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-11 02:48:46.628156 | orchestrator | Saturday 11 April 2026 02:48:40 +0000 (0:00:00.195) 0:00:11.274 ********
2026-04-11 02:48:46.628164 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628174 | orchestrator |
2026-04-11 02:48:46.628183 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-11 02:48:46.628191 | orchestrator | Saturday 11 April 2026 02:48:40 +0000 (0:00:00.153) 0:00:11.427 ********
2026-04-11 02:48:46.628223 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628233 | orchestrator |
2026-04-11 02:48:46.628242 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-11 02:48:46.628252 | orchestrator | Saturday 11 April 2026 02:48:41 +0000 (0:00:00.152) 0:00:11.579 ********
2026-04-11 02:48:46.628261 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628268 | orchestrator |
2026-04-11 02:48:46.628276 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-11 02:48:46.628284 | orchestrator | Saturday 11 April 2026 02:48:41 +0000 (0:00:00.134) 0:00:11.714 ********
2026-04-11 02:48:46.628292 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:48:46.628300 | orchestrator |
2026-04-11 02:48:46.628307 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-11 02:48:46.628315 | orchestrator | Saturday 11 April 2026 02:48:41 +0000 (0:00:00.151) 0:00:11.865 ********
2026-04-11 02:48:46.628323 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5955808-db0e-564c-b1b7-e2d336084003'}})
2026-04-11 02:48:46.628332 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6808ea3d-3e7e-5ef0-9dd2-f9487250f200'}})
2026-04-11 02:48:46.628339 | orchestrator |
2026-04-11 02:48:46.628347 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-11 02:48:46.628355 | orchestrator | Saturday 11 April 2026 02:48:41 +0000 (0:00:00.177) 0:00:12.042 ********
2026-04-11 02:48:46.628364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5955808-db0e-564c-b1b7-e2d336084003'}})
2026-04-11 02:48:46.628373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6808ea3d-3e7e-5ef0-9dd2-f9487250f200'}})
2026-04-11 02:48:46.628381 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628389 | orchestrator |
2026-04-11 02:48:46.628396 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-11 02:48:46.628404 | orchestrator | Saturday 11 April 2026 02:48:41 +0000 (0:00:00.402) 0:00:12.445 ********
2026-04-11 02:48:46.628412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5955808-db0e-564c-b1b7-e2d336084003'}})
2026-04-11 02:48:46.628420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6808ea3d-3e7e-5ef0-9dd2-f9487250f200'}})
2026-04-11 02:48:46.628428 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628435 | orchestrator |
2026-04-11 02:48:46.628443 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-11 02:48:46.628451 | orchestrator | Saturday 11 April 2026 02:48:42 +0000 (0:00:00.196) 0:00:12.642 ********
2026-04-11 02:48:46.628458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5955808-db0e-564c-b1b7-e2d336084003'}})
2026-04-11 02:48:46.628483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6808ea3d-3e7e-5ef0-9dd2-f9487250f200'}})
2026-04-11 02:48:46.628491 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628499 | orchestrator |
2026-04-11 02:48:46.628507 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-11 02:48:46.628515 | orchestrator | Saturday 11 April 2026 02:48:42 +0000 (0:00:00.159) 0:00:12.801 ********
2026-04-11 02:48:46.628523 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:48:46.628565 | orchestrator |
2026-04-11 02:48:46.628574 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-11 02:48:46.628587 | orchestrator | Saturday 11 April 2026 02:48:42 +0000 (0:00:00.172) 0:00:12.974 ********
2026-04-11 02:48:46.628595 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:48:46.628603 | orchestrator |
2026-04-11 02:48:46.628611 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-11 02:48:46.628619 | orchestrator | Saturday 11 April 2026 02:48:42 +0000 (0:00:00.168) 0:00:13.142 ********
2026-04-11 02:48:46.628634 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628642 | orchestrator |
2026-04-11 02:48:46.628650 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-11 02:48:46.628658 | orchestrator | Saturday 11 April 2026 02:48:42 +0000 (0:00:00.149) 0:00:13.292 ********
2026-04-11 02:48:46.628666 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628674 | orchestrator |
2026-04-11 02:48:46.628682 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-11 02:48:46.628690 | orchestrator | Saturday 11 April 2026 02:48:42 +0000 (0:00:00.152) 0:00:13.445 ********
2026-04-11 02:48:46.628698 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628706 | orchestrator |
2026-04-11 02:48:46.628714 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-11 02:48:46.628722 | orchestrator | Saturday 11 April 2026 02:48:43 +0000 (0:00:00.157) 0:00:13.602 ********
2026-04-11 02:48:46.628730 | orchestrator | ok: [testbed-node-3] => {
2026-04-11 02:48:46.628738 | orchestrator |     "ceph_osd_devices": {
2026-04-11 02:48:46.628746 | orchestrator |         "sdb": {
2026-04-11 02:48:46.628755 | orchestrator |             "osd_lvm_uuid": "c5955808-db0e-564c-b1b7-e2d336084003"
2026-04-11 02:48:46.628763 | orchestrator |         },
2026-04-11 02:48:46.628772 | orchestrator |         "sdc": {
2026-04-11 02:48:46.628780 | orchestrator |             "osd_lvm_uuid": "6808ea3d-3e7e-5ef0-9dd2-f9487250f200"
2026-04-11 02:48:46.628788 | orchestrator |         }
2026-04-11 02:48:46.628796 | orchestrator |     }
2026-04-11 02:48:46.628805 | orchestrator | }
2026-04-11 02:48:46.628813 | orchestrator |
2026-04-11 02:48:46.628821 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-11 02:48:46.628829 | orchestrator | Saturday 11 April 2026 02:48:43 +0000 (0:00:00.156) 0:00:13.759 ********
2026-04-11 02:48:46.628837 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628845 | orchestrator |
2026-04-11 02:48:46.628853 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-11 02:48:46.628861 | orchestrator | Saturday 11 April 2026 02:48:43 +0000 (0:00:00.166) 0:00:13.926 ********
2026-04-11 02:48:46.628869 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628877 | orchestrator |
2026-04-11 02:48:46.628885 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-11 02:48:46.628893 | orchestrator | Saturday 11 April 2026 02:48:43 +0000 (0:00:00.134) 0:00:14.060 ********
2026-04-11 02:48:46.628901 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:48:46.628909 | orchestrator |
2026-04-11 02:48:46.628917 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-11 02:48:46.628926 | orchestrator | Saturday 11 April 2026 02:48:43 +0000 (0:00:00.152) 0:00:14.213 ********
2026-04-11 02:48:46.628933 | orchestrator | changed: [testbed-node-3] => {
2026-04-11 02:48:46.628942 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-11 02:48:46.628950 | orchestrator |         "ceph_osd_devices": {
2026-04-11 02:48:46.628958 | orchestrator |             "sdb": {
2026-04-11 02:48:46.628966 | orchestrator |                 "osd_lvm_uuid": "c5955808-db0e-564c-b1b7-e2d336084003"
2026-04-11 02:48:46.628974 | orchestrator |             },
2026-04-11 02:48:46.628982 | orchestrator |             "sdc": {
2026-04-11 02:48:46.628990 | orchestrator |                 "osd_lvm_uuid": "6808ea3d-3e7e-5ef0-9dd2-f9487250f200"
2026-04-11 02:48:46.628998 | orchestrator |             }
2026-04-11 02:48:46.629006 | orchestrator |         },
2026-04-11 02:48:46.629014 | orchestrator |         "lvm_volumes": [
2026-04-11 02:48:46.629022 | orchestrator |             {
2026-04-11 02:48:46.629030 | orchestrator |                 "data": "osd-block-c5955808-db0e-564c-b1b7-e2d336084003",
2026-04-11 02:48:46.629039 | orchestrator |                 "data_vg": "ceph-c5955808-db0e-564c-b1b7-e2d336084003"
2026-04-11 02:48:46.629047 | orchestrator |             },
2026-04-11 02:48:46.629054 | orchestrator |             {
2026-04-11 02:48:46.629062 | orchestrator |                 "data": "osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200",
2026-04-11 02:48:46.629076 | orchestrator |                 "data_vg": "ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200"
2026-04-11 02:48:46.629084 | orchestrator |             }
2026-04-11 02:48:46.629092 | orchestrator |         ]
2026-04-11 02:48:46.629100 | orchestrator |     }
2026-04-11 02:48:46.629108 | orchestrator | }
2026-04-11 02:48:46.629117 | orchestrator |
2026-04-11 02:48:46.629124 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-11 02:48:46.629133 | orchestrator | Saturday 11 April 2026 02:48:44 +0000 (0:00:00.479) 0:00:14.693 ********
2026-04-11 02:48:46.629141 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 02:48:46.629148 | orchestrator |
2026-04-11 02:48:46.629156 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-11 02:48:46.629165 | orchestrator |
2026-04-11 02:48:46.629173 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-11 02:48:46.629181 | orchestrator | Saturday 11 April 2026 02:48:46 +0000 (0:00:01.924) 0:00:16.617 ********
2026-04-11 02:48:46.629189 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-11 02:48:46.629197 | orchestrator |
2026-04-11 02:48:46.629205 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-11 02:48:46.629213 | orchestrator | Saturday 11 April 2026 02:48:46 +0000 (0:00:00.284) 0:00:16.902 ********
2026-04-11 02:48:46.629221 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:48:46.629229 | orchestrator |
2026-04-11 02:48:46.629242 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.618910 | orchestrator | Saturday 11 April 2026 02:48:46 +0000 (0:00:00.269) 0:00:17.172 ********
2026-04-11 02:48:55.619014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-04-11 02:48:55.619026 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-04-11 02:48:55.619032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-04-11 02:48:55.619050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-04-11 02:48:55.619056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-04-11 02:48:55.619062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-04-11 02:48:55.619068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-04-11 02:48:55.619073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-04-11 02:48:55.619079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-04-11 02:48:55.619085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-04-11 02:48:55.619090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-04-11 02:48:55.619096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-04-11 02:48:55.619101 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-04-11 02:48:55.619107 | orchestrator |
2026-04-11 02:48:55.619113 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.619119 | orchestrator | Saturday 11 April 2026 02:48:47 +0000 (0:00:00.444) 0:00:17.617 ********
2026-04-11 02:48:55.619125 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619131 | orchestrator |
2026-04-11 02:48:55.619137 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.619142 | orchestrator | Saturday 11 April 2026 02:48:47 +0000 (0:00:00.221) 0:00:17.838 ********
2026-04-11 02:48:55.619148 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619153 | orchestrator |
2026-04-11 02:48:55.619159 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.619165 | orchestrator | Saturday 11 April 2026 02:48:47 +0000 (0:00:00.216) 0:00:18.055 ********
2026-04-11 02:48:55.619188 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619193 | orchestrator |
2026-04-11 02:48:55.619199 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.619205 | orchestrator | Saturday 11 April 2026 02:48:47 +0000 (0:00:00.205) 0:00:18.261 ********
2026-04-11 02:48:55.619210 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619215 | orchestrator |
2026-04-11 02:48:55.619221 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.619226 | orchestrator | Saturday 11 April 2026 02:48:48 +0000 (0:00:00.705) 0:00:18.966 ********
2026-04-11 02:48:55.619232 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619237 | orchestrator |
2026-04-11 02:48:55.619243 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.619248 | orchestrator | Saturday 11 April 2026 02:48:48 +0000 (0:00:00.235) 0:00:19.202 ********
2026-04-11 02:48:55.619254 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619259 | orchestrator |
2026-04-11 02:48:55.619264 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.619270 | orchestrator | Saturday 11 April 2026 02:48:48 +0000 (0:00:00.230) 0:00:19.433 ********
2026-04-11 02:48:55.619275 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619281 | orchestrator |
2026-04-11 02:48:55.619286 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.619292 | orchestrator | Saturday 11 April 2026 02:48:49 +0000 (0:00:00.213) 0:00:19.646 ********
2026-04-11 02:48:55.619297 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619303 | orchestrator |
2026-04-11 02:48:55.619308 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.619314 | orchestrator | Saturday 11 April 2026 02:48:49 +0000 (0:00:00.211) 0:00:19.857 ********
2026-04-11 02:48:55.619319 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4)
2026-04-11 02:48:55.619325 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4)
2026-04-11 02:48:55.619331 | orchestrator |
2026-04-11 02:48:55.619337 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.619346 | orchestrator | Saturday 11 April 2026 02:48:49 +0000 (0:00:00.442) 0:00:20.300 ********
2026-04-11 02:48:55.619355 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb)
2026-04-11 02:48:55.619364 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb)
2026-04-11 02:48:55.619373 | orchestrator |
2026-04-11 02:48:55.619382 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.619390 | orchestrator | Saturday 11 April 2026 02:48:50 +0000 (0:00:00.466) 0:00:20.767 ********
2026-04-11 02:48:55.619399 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f)
2026-04-11 02:48:55.619408 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f)
2026-04-11 02:48:55.619417 | orchestrator |
2026-04-11 02:48:55.619426 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.619452 | orchestrator | Saturday 11 April 2026 02:48:50 +0000 (0:00:00.496) 0:00:21.264 ********
2026-04-11 02:48:55.619462 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac)
2026-04-11 02:48:55.619471 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac)
2026-04-11 02:48:55.619478 | orchestrator |
2026-04-11 02:48:55.619484 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:48:55.619496 | orchestrator | Saturday 11 April 2026 02:48:51 +0000 (0:00:00.485) 0:00:21.749 ********
2026-04-11 02:48:55.619502 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-11 02:48:55.619515 | orchestrator |
2026-04-11 02:48:55.619522 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:55.619528 | orchestrator | Saturday 11 April 2026 02:48:51 +0000 (0:00:00.393) 0:00:22.142 ********
2026-04-11 02:48:55.619534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-11 02:48:55.619541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-11 02:48:55.619573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-11 02:48:55.619580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-11 02:48:55.619586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-11 02:48:55.619593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-11 02:48:55.619599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-11 02:48:55.619605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-11 02:48:55.619611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-11 02:48:55.619617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-11 02:48:55.619624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-11 02:48:55.619630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-11 02:48:55.619636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-11 02:48:55.619642 | orchestrator |
2026-04-11 02:48:55.619649 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:55.619655 | orchestrator | Saturday 11 April 2026 02:48:52 +0000 (0:00:00.429) 0:00:22.572 ********
2026-04-11 02:48:55.619665 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619674 | orchestrator |
2026-04-11 02:48:55.619683 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:55.619693 | orchestrator | Saturday 11 April 2026 02:48:52 +0000 (0:00:00.753) 0:00:23.326 ********
2026-04-11 02:48:55.619703 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619714 | orchestrator |
2026-04-11 02:48:55.619724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:55.619733 | orchestrator | Saturday 11 April 2026 02:48:53 +0000 (0:00:00.228) 0:00:23.555 ********
2026-04-11 02:48:55.619743 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619752 | orchestrator |
2026-04-11 02:48:55.619763 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:55.619774 | orchestrator | Saturday 11 April 2026 02:48:53 +0000 (0:00:00.232) 0:00:23.787 ********
2026-04-11 02:48:55.619784 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619791 | orchestrator |
2026-04-11 02:48:55.619796 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:55.619802 | orchestrator | Saturday 11 April 2026 02:48:53 +0000 (0:00:00.241) 0:00:24.029 ********
2026-04-11 02:48:55.619807 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619813 | orchestrator |
2026-04-11 02:48:55.619818 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:55.619824 | orchestrator | Saturday 11 April 2026 02:48:53 +0000 (0:00:00.211) 0:00:24.241 ********
2026-04-11 02:48:55.619833 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619842 | orchestrator |
2026-04-11 02:48:55.619850 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:55.619859 | orchestrator | Saturday 11 April 2026 02:48:53 +0000 (0:00:00.223) 0:00:24.465 ********
2026-04-11 02:48:55.619874 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619883 | orchestrator |
2026-04-11 02:48:55.619892 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:55.619901 | orchestrator | Saturday 11 April 2026 02:48:54 +0000 (0:00:00.245) 0:00:24.710 ********
2026-04-11 02:48:55.619911 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:48:55.619920 | orchestrator |
2026-04-11 02:48:55.619926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:55.619932 | orchestrator | Saturday 11 April 2026 02:48:54 +0000 (0:00:00.206) 0:00:24.917 ********
2026-04-11 02:48:55.619937 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-11 02:48:55.619943 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-11 02:48:55.619949 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-11 02:48:55.619954 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-11 02:48:55.619960 | orchestrator |
2026-04-11 02:48:55.619965 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:48:55.619971 | orchestrator | Saturday 11 April 2026 02:48:55 +0000 (0:00:01.038) 0:00:25.955 ********
2026-04-11 02:48:55.619976 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:49:02.809686 | orchestrator |
2026-04-11 02:49:02.809780 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:49:02.809794 | orchestrator | Saturday 11 April 2026 02:48:55 +0000 (0:00:00.208) 0:00:26.164 ********
2026-04-11 02:49:02.809804 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:49:02.809813 | orchestrator |
2026-04-11 02:49:02.809822 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:49:02.809831 | orchestrator | Saturday 11 April 2026 02:48:55 +0000 (0:00:00.219) 0:00:26.384 ********
2026-04-11 02:49:02.809852 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:49:02.809861 | orchestrator |
2026-04-11 02:49:02.809869 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:49:02.809878 | orchestrator | Saturday 11 April 2026 02:48:56 +0000 (0:00:00.785) 0:00:27.169 ********
2026-04-11 02:49:02.809886 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:49:02.809894 | orchestrator |
2026-04-11 02:49:02.809902 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-11 02:49:02.809910 | orchestrator | Saturday 11 April 2026 02:48:56 +0000 (0:00:00.237) 0:00:27.407 ********
2026-04-11 02:49:02.809918 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-04-11 02:49:02.809926 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-04-11 02:49:02.809935 | orchestrator |
2026-04-11 02:49:02.809943 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-11 02:49:02.809951 | orchestrator | Saturday 11 April 2026 02:48:57 +0000 (0:00:00.216) 0:00:27.623 ********
2026-04-11 02:49:02.809959 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:49:02.809967 | orchestrator |
2026-04-11 02:49:02.809975 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-11 02:49:02.809983 | orchestrator | Saturday 11 April 2026 02:48:57 +0000 (0:00:00.132) 0:00:27.756 ********
2026-04-11 02:49:02.809991 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:49:02.809999 | orchestrator |
2026-04-11 02:49:02.810007 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-11 02:49:02.810062 | orchestrator | Saturday 11 April 2026 02:48:57 +0000 (0:00:00.133) 0:00:27.890 ********
2026-04-11 02:49:02.810072 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:49:02.810080 | orchestrator |
2026-04-11 02:49:02.810088 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-11 02:49:02.810096 | orchestrator | Saturday 11 April 2026 02:48:57 +0000 (0:00:00.152) 0:00:28.042 ********
2026-04-11 02:49:02.810104 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:49:02.810114 | orchestrator |
2026-04-11 02:49:02.810122 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-11 02:49:02.810130 | orchestrator | Saturday 11 April 2026 02:48:57 +0000 (0:00:00.175) 0:00:28.218 ********
2026-04-11 02:49:02.810159 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4afe3055-abd0-5615-b44c-a776d8127855'}})
2026-04-11 02:49:02.810170 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1c2bdb62-89ba-5856-b2e0-5db351397ca2'}})
2026-04-11 02:49:02.810180 | orchestrator |
2026-04-11 02:49:02.810190 | orchestrator | TASK
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-11 02:49:02.810200 | orchestrator | Saturday 11 April 2026 02:48:57 +0000 (0:00:00.211) 0:00:28.429 ******** 2026-04-11 02:49:02.810210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4afe3055-abd0-5615-b44c-a776d8127855'}})  2026-04-11 02:49:02.810221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1c2bdb62-89ba-5856-b2e0-5db351397ca2'}})  2026-04-11 02:49:02.810230 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:49:02.810240 | orchestrator | 2026-04-11 02:49:02.810250 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-11 02:49:02.810259 | orchestrator | Saturday 11 April 2026 02:48:58 +0000 (0:00:00.163) 0:00:28.593 ******** 2026-04-11 02:49:02.810268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4afe3055-abd0-5615-b44c-a776d8127855'}})  2026-04-11 02:49:02.810278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1c2bdb62-89ba-5856-b2e0-5db351397ca2'}})  2026-04-11 02:49:02.810287 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:49:02.810296 | orchestrator | 2026-04-11 02:49:02.810306 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-11 02:49:02.810315 | orchestrator | Saturday 11 April 2026 02:48:58 +0000 (0:00:00.173) 0:00:28.767 ******** 2026-04-11 02:49:02.810324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4afe3055-abd0-5615-b44c-a776d8127855'}})  2026-04-11 02:49:02.810334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1c2bdb62-89ba-5856-b2e0-5db351397ca2'}})  2026-04-11 02:49:02.810343 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:49:02.810353 | 
orchestrator | 2026-04-11 02:49:02.810367 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-11 02:49:02.810381 | orchestrator | Saturday 11 April 2026 02:48:58 +0000 (0:00:00.170) 0:00:28.937 ******** 2026-04-11 02:49:02.810394 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:49:02.810408 | orchestrator | 2026-04-11 02:49:02.810420 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-11 02:49:02.810433 | orchestrator | Saturday 11 April 2026 02:48:58 +0000 (0:00:00.141) 0:00:29.078 ******** 2026-04-11 02:49:02.810447 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:49:02.810460 | orchestrator | 2026-04-11 02:49:02.810473 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-11 02:49:02.810488 | orchestrator | Saturday 11 April 2026 02:48:58 +0000 (0:00:00.163) 0:00:29.241 ******** 2026-04-11 02:49:02.810510 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:49:02.810519 | orchestrator | 2026-04-11 02:49:02.810527 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-11 02:49:02.810535 | orchestrator | Saturday 11 April 2026 02:48:59 +0000 (0:00:00.426) 0:00:29.667 ******** 2026-04-11 02:49:02.810543 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:49:02.810550 | orchestrator | 2026-04-11 02:49:02.810583 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-11 02:49:02.810592 | orchestrator | Saturday 11 April 2026 02:48:59 +0000 (0:00:00.141) 0:00:29.808 ******** 2026-04-11 02:49:02.810605 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:49:02.810613 | orchestrator | 2026-04-11 02:49:02.810621 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-11 02:49:02.810629 | orchestrator | Saturday 11 April 2026 02:48:59 +0000 
(0:00:00.163) 0:00:29.972 ******** 2026-04-11 02:49:02.810645 | orchestrator | ok: [testbed-node-4] => { 2026-04-11 02:49:02.810653 | orchestrator |  "ceph_osd_devices": { 2026-04-11 02:49:02.810661 | orchestrator |  "sdb": { 2026-04-11 02:49:02.810670 | orchestrator |  "osd_lvm_uuid": "4afe3055-abd0-5615-b44c-a776d8127855" 2026-04-11 02:49:02.810678 | orchestrator |  }, 2026-04-11 02:49:02.810686 | orchestrator |  "sdc": { 2026-04-11 02:49:02.810694 | orchestrator |  "osd_lvm_uuid": "1c2bdb62-89ba-5856-b2e0-5db351397ca2" 2026-04-11 02:49:02.810702 | orchestrator |  } 2026-04-11 02:49:02.810710 | orchestrator |  } 2026-04-11 02:49:02.810718 | orchestrator | } 2026-04-11 02:49:02.810727 | orchestrator | 2026-04-11 02:49:02.810738 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-11 02:49:02.810752 | orchestrator | Saturday 11 April 2026 02:48:59 +0000 (0:00:00.173) 0:00:30.146 ******** 2026-04-11 02:49:02.810767 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:49:02.810787 | orchestrator | 2026-04-11 02:49:02.810802 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-11 02:49:02.810816 | orchestrator | Saturday 11 April 2026 02:48:59 +0000 (0:00:00.157) 0:00:30.303 ******** 2026-04-11 02:49:02.810829 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:49:02.810842 | orchestrator | 2026-04-11 02:49:02.810856 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-11 02:49:02.810867 | orchestrator | Saturday 11 April 2026 02:48:59 +0000 (0:00:00.161) 0:00:30.465 ******** 2026-04-11 02:49:02.810881 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:49:02.810895 | orchestrator | 2026-04-11 02:49:02.810908 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-11 02:49:02.810923 | orchestrator | Saturday 11 April 2026 02:49:00 +0000 
(0:00:00.168) 0:00:30.634 ******** 2026-04-11 02:49:02.810937 | orchestrator | changed: [testbed-node-4] => { 2026-04-11 02:49:02.810952 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-11 02:49:02.810967 | orchestrator |  "ceph_osd_devices": { 2026-04-11 02:49:02.810977 | orchestrator |  "sdb": { 2026-04-11 02:49:02.810985 | orchestrator |  "osd_lvm_uuid": "4afe3055-abd0-5615-b44c-a776d8127855" 2026-04-11 02:49:02.810993 | orchestrator |  }, 2026-04-11 02:49:02.811001 | orchestrator |  "sdc": { 2026-04-11 02:49:02.811021 | orchestrator |  "osd_lvm_uuid": "1c2bdb62-89ba-5856-b2e0-5db351397ca2" 2026-04-11 02:49:02.811029 | orchestrator |  } 2026-04-11 02:49:02.811046 | orchestrator |  }, 2026-04-11 02:49:02.811055 | orchestrator |  "lvm_volumes": [ 2026-04-11 02:49:02.811063 | orchestrator |  { 2026-04-11 02:49:02.811071 | orchestrator |  "data": "osd-block-4afe3055-abd0-5615-b44c-a776d8127855", 2026-04-11 02:49:02.811079 | orchestrator |  "data_vg": "ceph-4afe3055-abd0-5615-b44c-a776d8127855" 2026-04-11 02:49:02.811087 | orchestrator |  }, 2026-04-11 02:49:02.811095 | orchestrator |  { 2026-04-11 02:49:02.811103 | orchestrator |  "data": "osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2", 2026-04-11 02:49:02.811111 | orchestrator |  "data_vg": "ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2" 2026-04-11 02:49:02.811119 | orchestrator |  } 2026-04-11 02:49:02.811127 | orchestrator |  ] 2026-04-11 02:49:02.811135 | orchestrator |  } 2026-04-11 02:49:02.811143 | orchestrator | } 2026-04-11 02:49:02.811151 | orchestrator | 2026-04-11 02:49:02.811159 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-11 02:49:02.811167 | orchestrator | Saturday 11 April 2026 02:49:00 +0000 (0:00:00.250) 0:00:30.884 ******** 2026-04-11 02:49:02.811175 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-11 02:49:02.811183 | orchestrator | 2026-04-11 02:49:02.811191 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-11 02:49:02.811199 | orchestrator | 2026-04-11 02:49:02.811207 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-11 02:49:02.811223 | orchestrator | Saturday 11 April 2026 02:49:01 +0000 (0:00:01.491) 0:00:32.376 ******** 2026-04-11 02:49:02.811231 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-11 02:49:02.811239 | orchestrator | 2026-04-11 02:49:02.811247 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-11 02:49:02.811255 | orchestrator | Saturday 11 April 2026 02:49:02 +0000 (0:00:00.287) 0:00:32.664 ******** 2026-04-11 02:49:02.811263 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:49:02.811271 | orchestrator | 2026-04-11 02:49:02.811279 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:49:02.811287 | orchestrator | Saturday 11 April 2026 02:49:02 +0000 (0:00:00.269) 0:00:32.933 ******** 2026-04-11 02:49:02.811295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-11 02:49:02.811303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-11 02:49:02.811311 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-11 02:49:02.811319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-11 02:49:02.811327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-11 02:49:02.811343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-11 02:49:12.203887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-11 02:49:12.203960 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-11 02:49:12.203967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-11 02:49:12.203983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-11 02:49:12.203988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-11 02:49:12.203993 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-11 02:49:12.203997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-11 02:49:12.204002 | orchestrator | 2026-04-11 02:49:12.204008 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:49:12.204013 | orchestrator | Saturday 11 April 2026 02:49:02 +0000 (0:00:00.418) 0:00:33.351 ******** 2026-04-11 02:49:12.204018 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204024 | orchestrator | 2026-04-11 02:49:12.204028 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:49:12.204032 | orchestrator | Saturday 11 April 2026 02:49:03 +0000 (0:00:00.233) 0:00:33.585 ******** 2026-04-11 02:49:12.204037 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204041 | orchestrator | 2026-04-11 02:49:12.204046 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:49:12.204050 | orchestrator | Saturday 11 April 2026 02:49:03 +0000 (0:00:00.223) 0:00:33.808 ******** 2026-04-11 02:49:12.204054 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204059 | orchestrator | 2026-04-11 02:49:12.204066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:49:12.204073 | 
orchestrator | Saturday 11 April 2026 02:49:03 +0000 (0:00:00.231) 0:00:34.039 ******** 2026-04-11 02:49:12.204080 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204087 | orchestrator | 2026-04-11 02:49:12.204098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:49:12.204108 | orchestrator | Saturday 11 April 2026 02:49:03 +0000 (0:00:00.221) 0:00:34.261 ******** 2026-04-11 02:49:12.204115 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204122 | orchestrator | 2026-04-11 02:49:12.204129 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:49:12.204136 | orchestrator | Saturday 11 April 2026 02:49:03 +0000 (0:00:00.234) 0:00:34.495 ******** 2026-04-11 02:49:12.204164 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204171 | orchestrator | 2026-04-11 02:49:12.204179 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:49:12.204187 | orchestrator | Saturday 11 April 2026 02:49:04 +0000 (0:00:00.231) 0:00:34.727 ******** 2026-04-11 02:49:12.204195 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204203 | orchestrator | 2026-04-11 02:49:12.204210 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:49:12.204218 | orchestrator | Saturday 11 April 2026 02:49:04 +0000 (0:00:00.740) 0:00:35.467 ******** 2026-04-11 02:49:12.204226 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204233 | orchestrator | 2026-04-11 02:49:12.204241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:49:12.204248 | orchestrator | Saturday 11 April 2026 02:49:05 +0000 (0:00:00.241) 0:00:35.709 ******** 2026-04-11 02:49:12.204256 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20) 2026-04-11 02:49:12.204265 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20) 2026-04-11 02:49:12.204273 | orchestrator | 2026-04-11 02:49:12.204281 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:49:12.204289 | orchestrator | Saturday 11 April 2026 02:49:05 +0000 (0:00:00.486) 0:00:36.195 ******** 2026-04-11 02:49:12.204296 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3) 2026-04-11 02:49:12.204304 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3) 2026-04-11 02:49:12.204311 | orchestrator | 2026-04-11 02:49:12.204319 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:49:12.204327 | orchestrator | Saturday 11 April 2026 02:49:06 +0000 (0:00:00.515) 0:00:36.711 ******** 2026-04-11 02:49:12.204334 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78) 2026-04-11 02:49:12.204341 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78) 2026-04-11 02:49:12.204349 | orchestrator | 2026-04-11 02:49:12.204357 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:49:12.204364 | orchestrator | Saturday 11 April 2026 02:49:06 +0000 (0:00:00.512) 0:00:37.224 ******** 2026-04-11 02:49:12.204373 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735) 2026-04-11 02:49:12.204380 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735) 2026-04-11 02:49:12.204388 | orchestrator | 2026-04-11 02:49:12.204396 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-11 02:49:12.204404 | orchestrator | Saturday 11 April 2026 02:49:07 +0000 (0:00:00.508) 0:00:37.732 ******** 2026-04-11 02:49:12.204411 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-11 02:49:12.204419 | orchestrator | 2026-04-11 02:49:12.204426 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.204448 | orchestrator | Saturday 11 April 2026 02:49:07 +0000 (0:00:00.377) 0:00:38.110 ******** 2026-04-11 02:49:12.204456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-11 02:49:12.204464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-11 02:49:12.204472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-11 02:49:12.204484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-11 02:49:12.204492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-11 02:49:12.204500 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-11 02:49:12.204517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-11 02:49:12.204524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-11 02:49:12.204533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-11 02:49:12.204540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-11 02:49:12.204548 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-04-11 02:49:12.204556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-11 02:49:12.204564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-11 02:49:12.204597 | orchestrator | 2026-04-11 02:49:12.204606 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.204614 | orchestrator | Saturday 11 April 2026 02:49:07 +0000 (0:00:00.436) 0:00:38.547 ******** 2026-04-11 02:49:12.204622 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204630 | orchestrator | 2026-04-11 02:49:12.204639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.204647 | orchestrator | Saturday 11 April 2026 02:49:08 +0000 (0:00:00.225) 0:00:38.773 ******** 2026-04-11 02:49:12.204655 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204663 | orchestrator | 2026-04-11 02:49:12.204671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.204679 | orchestrator | Saturday 11 April 2026 02:49:08 +0000 (0:00:00.226) 0:00:39.000 ******** 2026-04-11 02:49:12.204687 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204694 | orchestrator | 2026-04-11 02:49:12.204701 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.204709 | orchestrator | Saturday 11 April 2026 02:49:09 +0000 (0:00:00.783) 0:00:39.783 ******** 2026-04-11 02:49:12.204716 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204723 | orchestrator | 2026-04-11 02:49:12.204731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.204738 | orchestrator | Saturday 11 April 2026 02:49:09 +0000 (0:00:00.267) 0:00:40.051 ******** 2026-04-11 02:49:12.204745 
| orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204752 | orchestrator | 2026-04-11 02:49:12.204759 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.204767 | orchestrator | Saturday 11 April 2026 02:49:09 +0000 (0:00:00.269) 0:00:40.321 ******** 2026-04-11 02:49:12.204775 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204782 | orchestrator | 2026-04-11 02:49:12.204790 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.204798 | orchestrator | Saturday 11 April 2026 02:49:09 +0000 (0:00:00.231) 0:00:40.552 ******** 2026-04-11 02:49:12.204806 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204814 | orchestrator | 2026-04-11 02:49:12.204822 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.204830 | orchestrator | Saturday 11 April 2026 02:49:10 +0000 (0:00:00.224) 0:00:40.777 ******** 2026-04-11 02:49:12.204837 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204845 | orchestrator | 2026-04-11 02:49:12.204852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.204860 | orchestrator | Saturday 11 April 2026 02:49:10 +0000 (0:00:00.244) 0:00:41.021 ******** 2026-04-11 02:49:12.204867 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-11 02:49:12.204875 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-11 02:49:12.204882 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-11 02:49:12.204890 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-11 02:49:12.204898 | orchestrator | 2026-04-11 02:49:12.204912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.204920 | orchestrator | Saturday 11 April 2026 02:49:11 +0000 (0:00:00.748) 
0:00:41.770 ******** 2026-04-11 02:49:12.204927 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204935 | orchestrator | 2026-04-11 02:49:12.204943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.204951 | orchestrator | Saturday 11 April 2026 02:49:11 +0000 (0:00:00.224) 0:00:41.994 ******** 2026-04-11 02:49:12.204958 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204966 | orchestrator | 2026-04-11 02:49:12.204973 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.204981 | orchestrator | Saturday 11 April 2026 02:49:11 +0000 (0:00:00.245) 0:00:42.239 ******** 2026-04-11 02:49:12.204987 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.204994 | orchestrator | 2026-04-11 02:49:12.205001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:49:12.205008 | orchestrator | Saturday 11 April 2026 02:49:11 +0000 (0:00:00.239) 0:00:42.479 ******** 2026-04-11 02:49:12.205015 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:12.205022 | orchestrator | 2026-04-11 02:49:12.205037 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-11 02:49:17.077075 | orchestrator | Saturday 11 April 2026 02:49:12 +0000 (0:00:00.270) 0:00:42.750 ******** 2026-04-11 02:49:17.077169 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-04-11 02:49:17.077181 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-04-11 02:49:17.077191 | orchestrator | 2026-04-11 02:49:17.077201 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-11 02:49:17.077227 | orchestrator | Saturday 11 April 2026 02:49:12 +0000 (0:00:00.424) 0:00:43.175 ******** 2026-04-11 02:49:17.077236 | orchestrator | skipping: 
[testbed-node-5] 2026-04-11 02:49:17.077245 | orchestrator | 2026-04-11 02:49:17.077253 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-11 02:49:17.077261 | orchestrator | Saturday 11 April 2026 02:49:12 +0000 (0:00:00.133) 0:00:43.308 ******** 2026-04-11 02:49:17.077270 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:17.077279 | orchestrator | 2026-04-11 02:49:17.077287 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-11 02:49:17.077295 | orchestrator | Saturday 11 April 2026 02:49:12 +0000 (0:00:00.141) 0:00:43.450 ******** 2026-04-11 02:49:17.077302 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:17.077310 | orchestrator | 2026-04-11 02:49:17.077319 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-11 02:49:17.077327 | orchestrator | Saturday 11 April 2026 02:49:13 +0000 (0:00:00.147) 0:00:43.597 ******** 2026-04-11 02:49:17.077335 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:49:17.077346 | orchestrator | 2026-04-11 02:49:17.077354 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-11 02:49:17.077363 | orchestrator | Saturday 11 April 2026 02:49:13 +0000 (0:00:00.157) 0:00:43.754 ******** 2026-04-11 02:49:17.077371 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e8a3f20d-ed3f-5f34-b319-d0862efd8412'}}) 2026-04-11 02:49:17.077381 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a718c651-a264-5d59-a3a1-3dddb23bb056'}}) 2026-04-11 02:49:17.077390 | orchestrator | 2026-04-11 02:49:17.077398 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-11 02:49:17.077407 | orchestrator | Saturday 11 April 2026 02:49:13 +0000 (0:00:00.194) 0:00:43.949 ******** 2026-04-11 02:49:17.077417 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e8a3f20d-ed3f-5f34-b319-d0862efd8412'}})  2026-04-11 02:49:17.077427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a718c651-a264-5d59-a3a1-3dddb23bb056'}})  2026-04-11 02:49:17.077458 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:17.077467 | orchestrator | 2026-04-11 02:49:17.077476 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-11 02:49:17.077484 | orchestrator | Saturday 11 April 2026 02:49:13 +0000 (0:00:00.184) 0:00:44.134 ******** 2026-04-11 02:49:17.077493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e8a3f20d-ed3f-5f34-b319-d0862efd8412'}})  2026-04-11 02:49:17.077501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a718c651-a264-5d59-a3a1-3dddb23bb056'}})  2026-04-11 02:49:17.077510 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:17.077519 | orchestrator | 2026-04-11 02:49:17.077527 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-11 02:49:17.077546 | orchestrator | Saturday 11 April 2026 02:49:13 +0000 (0:00:00.201) 0:00:44.335 ******** 2026-04-11 02:49:17.077555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e8a3f20d-ed3f-5f34-b319-d0862efd8412'}})  2026-04-11 02:49:17.077564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a718c651-a264-5d59-a3a1-3dddb23bb056'}})  2026-04-11 02:49:17.077573 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:49:17.077633 | orchestrator | 2026-04-11 02:49:17.077644 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-11 02:49:17.077653 | orchestrator | Saturday 11 April 2026 02:49:13 +0000 
(0:00:00.167) 0:00:44.503 ********
2026-04-11 02:49:17.077662 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:49:17.077669 | orchestrator |
2026-04-11 02:49:17.077675 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-11 02:49:17.077682 | orchestrator | Saturday 11 April 2026 02:49:14 +0000 (0:00:00.174) 0:00:44.677 ********
2026-04-11 02:49:17.077691 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:49:17.077699 | orchestrator |
2026-04-11 02:49:17.077707 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-11 02:49:17.077716 | orchestrator | Saturday 11 April 2026 02:49:14 +0000 (0:00:00.151) 0:00:44.829 ********
2026-04-11 02:49:17.077726 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:49:17.077735 | orchestrator |
2026-04-11 02:49:17.077744 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-11 02:49:17.077753 | orchestrator | Saturday 11 April 2026 02:49:14 +0000 (0:00:00.432) 0:00:45.262 ********
2026-04-11 02:49:17.077763 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:49:17.077771 | orchestrator |
2026-04-11 02:49:17.077780 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-11 02:49:17.077790 | orchestrator | Saturday 11 April 2026 02:49:14 +0000 (0:00:00.149) 0:00:45.411 ********
2026-04-11 02:49:17.077799 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:49:17.077808 | orchestrator |
2026-04-11 02:49:17.077818 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-11 02:49:17.077826 | orchestrator | Saturday 11 April 2026 02:49:15 +0000 (0:00:00.168) 0:00:45.580 ********
2026-04-11 02:49:17.077835 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 02:49:17.077844 | orchestrator |     "ceph_osd_devices": {
2026-04-11 02:49:17.077853 | orchestrator |         "sdb": {
2026-04-11 02:49:17.077879 | orchestrator |             "osd_lvm_uuid": "e8a3f20d-ed3f-5f34-b319-d0862efd8412"
2026-04-11 02:49:17.077889 | orchestrator |         },
2026-04-11 02:49:17.077899 | orchestrator |         "sdc": {
2026-04-11 02:49:17.077908 | orchestrator |             "osd_lvm_uuid": "a718c651-a264-5d59-a3a1-3dddb23bb056"
2026-04-11 02:49:17.077917 | orchestrator |         }
2026-04-11 02:49:17.077926 | orchestrator |     }
2026-04-11 02:49:17.077935 | orchestrator | }
2026-04-11 02:49:17.077943 | orchestrator |
2026-04-11 02:49:17.077960 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-11 02:49:17.077966 | orchestrator | Saturday 11 April 2026 02:49:15 +0000 (0:00:00.186) 0:00:45.766 ********
2026-04-11 02:49:17.077971 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:49:17.077983 | orchestrator |
2026-04-11 02:49:17.077988 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-11 02:49:17.077993 | orchestrator | Saturday 11 April 2026 02:49:15 +0000 (0:00:00.160) 0:00:45.927 ********
2026-04-11 02:49:17.077998 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:49:17.078003 | orchestrator |
2026-04-11 02:49:17.078008 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-11 02:49:17.078056 | orchestrator | Saturday 11 April 2026 02:49:15 +0000 (0:00:00.203) 0:00:46.130 ********
2026-04-11 02:49:17.078064 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:49:17.078069 | orchestrator |
2026-04-11 02:49:17.078074 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-11 02:49:17.078080 | orchestrator | Saturday 11 April 2026 02:49:15 +0000 (0:00:00.157) 0:00:46.288 ********
2026-04-11 02:49:17.078085 | orchestrator | changed: [testbed-node-5] => {
2026-04-11 02:49:17.078090 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-11 02:49:17.078095 | orchestrator |         "ceph_osd_devices": {
2026-04-11 02:49:17.078100 | orchestrator |             "sdb": {
2026-04-11 02:49:17.078106 | orchestrator |                 "osd_lvm_uuid": "e8a3f20d-ed3f-5f34-b319-d0862efd8412"
2026-04-11 02:49:17.078111 | orchestrator |             },
2026-04-11 02:49:17.078116 | orchestrator |             "sdc": {
2026-04-11 02:49:17.078121 | orchestrator |                 "osd_lvm_uuid": "a718c651-a264-5d59-a3a1-3dddb23bb056"
2026-04-11 02:49:17.078126 | orchestrator |             }
2026-04-11 02:49:17.078131 | orchestrator |         },
2026-04-11 02:49:17.078136 | orchestrator |         "lvm_volumes": [
2026-04-11 02:49:17.078142 | orchestrator |             {
2026-04-11 02:49:17.078147 | orchestrator |                 "data": "osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412",
2026-04-11 02:49:17.078152 | orchestrator |                 "data_vg": "ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412"
2026-04-11 02:49:17.078157 | orchestrator |             },
2026-04-11 02:49:17.078163 | orchestrator |             {
2026-04-11 02:49:17.078168 | orchestrator |                 "data": "osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056",
2026-04-11 02:49:17.078173 | orchestrator |                 "data_vg": "ceph-a718c651-a264-5d59-a3a1-3dddb23bb056"
2026-04-11 02:49:17.078178 | orchestrator |             }
2026-04-11 02:49:17.078183 | orchestrator |         ]
2026-04-11 02:49:17.078188 | orchestrator |     }
2026-04-11 02:49:17.078193 | orchestrator | }
2026-04-11 02:49:17.078198 | orchestrator |
2026-04-11 02:49:17.078203 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-11 02:49:17.078208 | orchestrator | Saturday 11 April 2026 02:49:15 +0000 (0:00:00.234) 0:00:46.522 ********
2026-04-11 02:49:17.078214 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-11 02:49:17.078219 | orchestrator |
2026-04-11 02:49:17.078224 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:49:17.078229 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-11 02:49:17.078236 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-11 02:49:17.078241 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-11 02:49:17.078246 | orchestrator |
2026-04-11 02:49:17.078251 | orchestrator |
2026-04-11 02:49:17.078256 | orchestrator |
2026-04-11 02:49:17.078261 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:49:17.078266 | orchestrator | Saturday 11 April 2026 02:49:17 +0000 (0:00:01.084) 0:00:47.607 ********
2026-04-11 02:49:17.078272 | orchestrator | ===============================================================================
2026-04-11 02:49:17.078277 | orchestrator | Write configuration file ------------------------------------------------ 4.50s
2026-04-11 02:49:17.078286 | orchestrator | Add known links to the list of available block devices ------------------ 1.40s
2026-04-11 02:49:17.078291 | orchestrator | Add known partitions to the list of available block devices ------------- 1.32s
2026-04-11 02:49:17.078296 | orchestrator | Add known partitions to the list of available block devices ------------- 1.21s
2026-04-11 02:49:17.078301 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s
2026-04-11 02:49:17.078306 | orchestrator | Add known links to the list of available block devices ------------------ 1.02s
2026-04-11 02:49:17.078311 | orchestrator | Set DB devices config data ---------------------------------------------- 1.01s
2026-04-11 02:49:17.078316 | orchestrator | Print configuration data ------------------------------------------------ 0.97s
2026-04-11 02:49:17.078321 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.84s
2026-04-11 02:49:17.078326 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.84s
2026-04-11 02:49:17.078332 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s
2026-04-11 02:49:17.078337 | orchestrator | Get initial list of available block devices ----------------------------- 0.80s
2026-04-11 02:49:17.078342 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s
2026-04-11 02:49:17.078353 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2026-04-11 02:49:17.590411 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2026-04-11 02:49:17.590526 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.75s
2026-04-11 02:49:17.590542 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2026-04-11 02:49:17.590572 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2026-04-11 02:49:17.590630 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2026-04-11 02:49:17.590643 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2026-04-11 02:49:40.410566 | orchestrator | 2026-04-11 02:49:40 | INFO  | Task 73acc882-368d-420b-b825-fccd2b018bb6 (sync inventory) is running in background. Output coming soon.
2026-04-11 02:50:11.916277 | orchestrator | 2026-04-11 02:49:41 | INFO  | Starting group_vars file reorganization
2026-04-11 02:50:11.916398 | orchestrator | 2026-04-11 02:49:41 | INFO  | Moved 0 file(s) to their respective directories
2026-04-11 02:50:11.916430 | orchestrator | 2026-04-11 02:49:41 | INFO  | Group_vars file reorganization completed
2026-04-11 02:50:11.916452 | orchestrator | 2026-04-11 02:49:45 | INFO  | Starting variable preparation from inventory
2026-04-11 02:50:11.916474 | orchestrator | 2026-04-11 02:49:48 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-11 02:50:11.916493 | orchestrator | 2026-04-11 02:49:48 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-11 02:50:11.916513 | orchestrator | 2026-04-11 02:49:48 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-11 02:50:11.916551 | orchestrator | 2026-04-11 02:49:48 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-11 02:50:11.916576 | orchestrator | 2026-04-11 02:49:48 | INFO  | Variable preparation completed
2026-04-11 02:50:11.916597 | orchestrator | 2026-04-11 02:49:50 | INFO  | Starting inventory overwrite handling
2026-04-11 02:50:11.916617 | orchestrator | 2026-04-11 02:49:50 | INFO  | Handling group overwrites in 99-overwrite
2026-04-11 02:50:11.916638 | orchestrator | 2026-04-11 02:49:50 | INFO  | Removing group frr:children from 60-generic
2026-04-11 02:50:11.916658 | orchestrator | 2026-04-11 02:49:50 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-11 02:50:11.916730 | orchestrator | 2026-04-11 02:49:50 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-11 02:50:11.916785 | orchestrator | 2026-04-11 02:49:50 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-11 02:50:11.916805 | orchestrator | 2026-04-11 02:49:50 | INFO  | Handling group overwrites in 20-roles
2026-04-11 02:50:11.916825 | orchestrator | 2026-04-11 02:49:50 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-11 02:50:11.916844 | orchestrator | 2026-04-11 02:49:50 | INFO  | Removed 5 group(s) in total
2026-04-11 02:50:11.916863 | orchestrator | 2026-04-11 02:49:50 | INFO  | Inventory overwrite handling completed
2026-04-11 02:50:11.916881 | orchestrator | 2026-04-11 02:49:51 | INFO  | Starting merge of inventory files
2026-04-11 02:50:11.916900 | orchestrator | 2026-04-11 02:49:51 | INFO  | Inventory files merged successfully
2026-04-11 02:50:11.916919 | orchestrator | 2026-04-11 02:49:57 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-11 02:50:11.916937 | orchestrator | 2026-04-11 02:50:10 | INFO  | Successfully wrote ClusterShell configuration
2026-04-11 02:50:11.916957 | orchestrator | [master 03de62d] 2026-04-11-02-50
2026-04-11 02:50:11.916976 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-04-11 02:50:14.358181 | orchestrator | 2026-04-11 02:50:14 | INFO  | Task 49c6abc4-ae02-4671-8230-a5ca21f7ea93 (ceph-create-lvm-devices) was prepared for execution.
2026-04-11 02:50:14.358294 | orchestrator | 2026-04-11 02:50:14 | INFO  | It takes a moment until task 49c6abc4-ae02-4671-8230-a5ca21f7ea93 (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-11 02:50:27.516303 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-11 02:50:27.516407 | orchestrator | 2.16.14
2026-04-11 02:50:27.516422 | orchestrator |
2026-04-11 02:50:27.516434 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-11 02:50:27.516445 | orchestrator |
2026-04-11 02:50:27.516455 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-11 02:50:27.516465 | orchestrator | Saturday 11 April 2026 02:50:19 +0000 (0:00:00.349) 0:00:00.349 ********
2026-04-11 02:50:27.516475 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 02:50:27.516485 | orchestrator |
2026-04-11 02:50:27.516495 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-11 02:50:27.516505 | orchestrator | Saturday 11 April 2026 02:50:19 +0000 (0:00:00.273) 0:00:00.623 ********
2026-04-11 02:50:27.516515 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:50:27.516525 | orchestrator |
2026-04-11 02:50:27.516535 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.516545 | orchestrator | Saturday 11 April 2026 02:50:19 +0000 (0:00:00.250) 0:00:00.874 ********
2026-04-11 02:50:27.516555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-11 02:50:27.516579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-11 02:50:27.516590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-11 02:50:27.516599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-11 02:50:27.516609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-11 02:50:27.516617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-11 02:50:27.516625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-11 02:50:27.516633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-11 02:50:27.516641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-11 02:50:27.516649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-11 02:50:27.516678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-11 02:50:27.516686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-11 02:50:27.516726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-11 02:50:27.516735 | orchestrator |
2026-04-11 02:50:27.516743 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.516751 | orchestrator | Saturday 11 April 2026 02:50:20 +0000 (0:00:00.597) 0:00:01.471 ********
2026-04-11 02:50:27.516759 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.516767 | orchestrator |
2026-04-11 02:50:27.516775 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.516783 | orchestrator | Saturday 11 April 2026 02:50:20 +0000 (0:00:00.223) 0:00:01.695 ********
2026-04-11 02:50:27.516791 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.516799 | orchestrator |
2026-04-11 02:50:27.516806 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.516814 | orchestrator | Saturday 11 April 2026 02:50:20 +0000 (0:00:00.225) 0:00:01.920 ********
2026-04-11 02:50:27.516822 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.516830 | orchestrator |
2026-04-11 02:50:27.516838 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.516847 | orchestrator | Saturday 11 April 2026 02:50:21 +0000 (0:00:00.218) 0:00:02.138 ********
2026-04-11 02:50:27.516856 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.516865 | orchestrator |
2026-04-11 02:50:27.516874 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.516884 | orchestrator | Saturday 11 April 2026 02:50:21 +0000 (0:00:00.218) 0:00:02.357 ********
2026-04-11 02:50:27.516893 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.516901 | orchestrator |
2026-04-11 02:50:27.516910 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.516920 | orchestrator | Saturday 11 April 2026 02:50:21 +0000 (0:00:00.209) 0:00:02.567 ********
2026-04-11 02:50:27.516929 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.516938 | orchestrator |
2026-04-11 02:50:27.516947 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.516956 | orchestrator | Saturday 11 April 2026 02:50:21 +0000 (0:00:00.216) 0:00:02.783 ********
2026-04-11 02:50:27.516966 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.516974 | orchestrator |
2026-04-11 02:50:27.516983 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.516993 | orchestrator | Saturday 11 April 2026 02:50:21 +0000 (0:00:00.225) 0:00:03.009 ********
2026-04-11 02:50:27.517002 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.517011 | orchestrator |
2026-04-11 02:50:27.517020 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.517029 | orchestrator | Saturday 11 April 2026 02:50:22 +0000 (0:00:00.237) 0:00:03.246 ********
2026-04-11 02:50:27.517038 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a)
2026-04-11 02:50:27.517048 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a)
2026-04-11 02:50:27.517057 | orchestrator |
2026-04-11 02:50:27.517066 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.517089 | orchestrator | Saturday 11 April 2026 02:50:22 +0000 (0:00:00.463) 0:00:03.709 ********
2026-04-11 02:50:27.517099 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c)
2026-04-11 02:50:27.517108 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c)
2026-04-11 02:50:27.517117 | orchestrator |
2026-04-11 02:50:27.517126 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.517142 | orchestrator | Saturday 11 April 2026 02:50:23 +0000 (0:00:00.716) 0:00:04.425 ********
2026-04-11 02:50:27.517151 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898)
2026-04-11 02:50:27.517160 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898)
2026-04-11 02:50:27.517169 | orchestrator |
2026-04-11 02:50:27.517189 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.517199 | orchestrator | Saturday 11 April 2026 02:50:24 +0000 (0:00:00.723) 0:00:05.149 ********
2026-04-11 02:50:27.517207 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7)
2026-04-11 02:50:27.517228 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7)
2026-04-11 02:50:27.517237 | orchestrator |
2026-04-11 02:50:27.517245 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:50:27.517253 | orchestrator | Saturday 11 April 2026 02:50:25 +0000 (0:00:00.991) 0:00:06.140 ********
2026-04-11 02:50:27.517261 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-11 02:50:27.517269 | orchestrator |
2026-04-11 02:50:27.517277 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:27.517285 | orchestrator | Saturday 11 April 2026 02:50:25 +0000 (0:00:00.376) 0:00:06.517 ********
2026-04-11 02:50:27.517293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-11 02:50:27.517301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-11 02:50:27.517309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-11 02:50:27.517317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-11 02:50:27.517324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-11 02:50:27.517332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-11 02:50:27.517340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-11 02:50:27.517348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-11 02:50:27.517356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-11 02:50:27.517364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-11 02:50:27.517371 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-11 02:50:27.517379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-11 02:50:27.517387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-11 02:50:27.517395 | orchestrator |
2026-04-11 02:50:27.517403 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:27.517410 | orchestrator | Saturday 11 April 2026 02:50:25 +0000 (0:00:00.448) 0:00:06.965 ********
2026-04-11 02:50:27.517418 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.517426 | orchestrator |
2026-04-11 02:50:27.517434 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:27.517442 | orchestrator | Saturday 11 April 2026 02:50:26 +0000 (0:00:00.247) 0:00:07.213 ********
2026-04-11 02:50:27.517450 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.517458 | orchestrator |
2026-04-11 02:50:27.517466 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:27.517473 | orchestrator | Saturday 11 April 2026 02:50:26 +0000 (0:00:00.215) 0:00:07.428 ********
2026-04-11 02:50:27.517481 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.517495 | orchestrator |
2026-04-11 02:50:27.517503 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:27.517511 | orchestrator | Saturday 11 April 2026 02:50:26 +0000 (0:00:00.216) 0:00:07.645 ********
2026-04-11 02:50:27.517519 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.517527 | orchestrator |
2026-04-11 02:50:27.517535 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:27.517543 | orchestrator | Saturday 11 April 2026 02:50:26 +0000 (0:00:00.239) 0:00:07.884 ********
2026-04-11 02:50:27.517551 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.517558 | orchestrator |
2026-04-11 02:50:27.517566 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:27.517574 | orchestrator | Saturday 11 April 2026 02:50:27 +0000 (0:00:00.223) 0:00:08.108 ********
2026-04-11 02:50:27.517582 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.517590 | orchestrator |
2026-04-11 02:50:27.517597 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:27.517605 | orchestrator | Saturday 11 April 2026 02:50:27 +0000 (0:00:00.209) 0:00:08.318 ********
2026-04-11 02:50:27.517613 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:27.517621 | orchestrator |
2026-04-11 02:50:27.517633 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:36.230624 | orchestrator | Saturday 11 April 2026 02:50:27 +0000 (0:00:00.236) 0:00:08.555 ********
2026-04-11 02:50:36.230820 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.230846 | orchestrator |
2026-04-11 02:50:36.230866 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:36.230880 | orchestrator | Saturday 11 April 2026 02:50:28 +0000 (0:00:00.716) 0:00:09.271 ********
2026-04-11 02:50:36.230891 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-11 02:50:36.230901 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-11 02:50:36.230911 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-11 02:50:36.230921 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-11 02:50:36.230931 | orchestrator |
2026-04-11 02:50:36.230941 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:36.230951 | orchestrator | Saturday 11 April 2026 02:50:28 +0000 (0:00:00.757) 0:00:10.028 ********
2026-04-11 02:50:36.230960 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.230987 | orchestrator |
2026-04-11 02:50:36.231007 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:36.231017 | orchestrator | Saturday 11 April 2026 02:50:29 +0000 (0:00:00.237) 0:00:10.266 ********
2026-04-11 02:50:36.231027 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.231036 | orchestrator |
2026-04-11 02:50:36.231062 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:36.231072 | orchestrator | Saturday 11 April 2026 02:50:29 +0000 (0:00:00.270) 0:00:10.536 ********
2026-04-11 02:50:36.231082 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.231091 | orchestrator |
2026-04-11 02:50:36.231101 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:50:36.231111 | orchestrator | Saturday 11 April 2026 02:50:29 +0000 (0:00:00.240) 0:00:10.777 ********
2026-04-11 02:50:36.231121 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.231131 | orchestrator |
2026-04-11 02:50:36.231142 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-11 02:50:36.231153 | orchestrator | Saturday 11 April 2026 02:50:29 +0000 (0:00:00.224) 0:00:11.001 ********
2026-04-11 02:50:36.231164 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.231175 | orchestrator |
2026-04-11 02:50:36.231186 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-11 02:50:36.231198 | orchestrator | Saturday 11 April 2026 02:50:30 +0000 (0:00:00.180) 0:00:11.181 ********
2026-04-11 02:50:36.231210 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5955808-db0e-564c-b1b7-e2d336084003'}})
2026-04-11 02:50:36.231245 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6808ea3d-3e7e-5ef0-9dd2-f9487250f200'}})
2026-04-11 02:50:36.231256 | orchestrator |
2026-04-11 02:50:36.231268 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-11 02:50:36.231280 | orchestrator | Saturday 11 April 2026 02:50:30 +0000 (0:00:00.207) 0:00:11.388 ********
2026-04-11 02:50:36.231293 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})
2026-04-11 02:50:36.231305 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})
2026-04-11 02:50:36.231316 | orchestrator |
2026-04-11 02:50:36.231328 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-11 02:50:36.231339 | orchestrator | Saturday 11 April 2026 02:50:32 +0000 (0:00:01.994) 0:00:13.383 ********
2026-04-11 02:50:36.231350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})
2026-04-11 02:50:36.231363 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})
2026-04-11 02:50:36.231375 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.231386 | orchestrator |
2026-04-11 02:50:36.231397 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-11 02:50:36.231408 | orchestrator | Saturday 11 April 2026 02:50:32 +0000 (0:00:00.171) 0:00:13.555 ********
2026-04-11 02:50:36.231419 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})
2026-04-11 02:50:36.231430 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})
2026-04-11 02:50:36.231442 | orchestrator |
2026-04-11 02:50:36.231453 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-11 02:50:36.231464 | orchestrator | Saturday 11 April 2026 02:50:33 +0000 (0:00:01.481) 0:00:15.036 ********
2026-04-11 02:50:36.231475 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})
2026-04-11 02:50:36.231486 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})
2026-04-11 02:50:36.231496 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.231506 | orchestrator |
2026-04-11 02:50:36.231516 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-11 02:50:36.231526 | orchestrator | Saturday 11 April 2026 02:50:34 +0000 (0:00:00.168) 0:00:15.205 ********
2026-04-11 02:50:36.231554 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.231564 | orchestrator |
2026-04-11 02:50:36.231580 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-11 02:50:36.231597 | orchestrator | Saturday 11 April 2026 02:50:34 +0000 (0:00:00.404) 0:00:15.610 ********
2026-04-11 02:50:36.231612 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})
2026-04-11 02:50:36.231628 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})
2026-04-11 02:50:36.231643 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.231659 | orchestrator |
2026-04-11 02:50:36.231674 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-11 02:50:36.231690 | orchestrator | Saturday 11 April 2026 02:50:34 +0000 (0:00:00.177) 0:00:15.787 ********
2026-04-11 02:50:36.231743 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.231760 | orchestrator |
2026-04-11 02:50:36.231776 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-11 02:50:36.231793 | orchestrator | Saturday 11 April 2026 02:50:34 +0000 (0:00:00.156) 0:00:15.944 ********
2026-04-11 02:50:36.231820 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})
2026-04-11 02:50:36.231839 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})
2026-04-11 02:50:36.231856 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.231875 | orchestrator |
2026-04-11 02:50:36.231893 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-11 02:50:36.231910 | orchestrator | Saturday 11 April 2026 02:50:35 +0000 (0:00:00.166) 0:00:16.110 ********
2026-04-11 02:50:36.231923 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.231940 | orchestrator |
2026-04-11 02:50:36.231955 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-11 02:50:36.231972 | orchestrator | Saturday 11 April 2026 02:50:35 +0000 (0:00:00.150) 0:00:16.261 ********
2026-04-11 02:50:36.231988 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})
2026-04-11 02:50:36.232005 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})
2026-04-11 02:50:36.232018 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.232027 | orchestrator |
2026-04-11 02:50:36.232037 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-11 02:50:36.232047 | orchestrator | Saturday 11 April 2026 02:50:35 +0000 (0:00:00.165) 0:00:16.426 ********
2026-04-11 02:50:36.232057 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:50:36.232067 | orchestrator |
2026-04-11 02:50:36.232076 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-11 02:50:36.232086 | orchestrator | Saturday 11 April 2026 02:50:35 +0000 (0:00:00.161) 0:00:16.588 ********
2026-04-11 02:50:36.232096 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})
2026-04-11 02:50:36.232106 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})
2026-04-11 02:50:36.232115 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:50:36.232125 | orchestrator |
2026-04-11 02:50:36.232134 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-11 02:50:36.232144 | orchestrator | Saturday 11 April 2026 02:50:35 +0000 (0:00:00.160) 0:00:16.748 ********
2026-04-11 02:50:36.232154 | orchestrator | skipping: [testbed-node-3] =>
(item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 02:50:36.232164 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 02:50:36.232173 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:36.232183 | orchestrator | 2026-04-11 02:50:36.232192 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-11 02:50:36.232202 | orchestrator | Saturday 11 April 2026 02:50:35 +0000 (0:00:00.171) 0:00:16.919 ******** 2026-04-11 02:50:36.232212 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 02:50:36.232222 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 02:50:36.232240 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:36.232249 | orchestrator | 2026-04-11 02:50:36.232259 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-11 02:50:36.232269 | orchestrator | Saturday 11 April 2026 02:50:36 +0000 (0:00:00.209) 0:00:17.129 ******** 2026-04-11 02:50:36.232278 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:36.232288 | orchestrator | 2026-04-11 02:50:36.232298 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-11 02:50:36.232318 | orchestrator | Saturday 11 April 2026 02:50:36 +0000 (0:00:00.141) 0:00:17.270 ******** 2026-04-11 02:50:43.428570 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.428697 | orchestrator | 2026-04-11 02:50:43.428767 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-04-11 02:50:43.428780 | orchestrator | Saturday 11 April 2026 02:50:36 +0000 (0:00:00.145) 0:00:17.416 ******** 2026-04-11 02:50:43.428788 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.428796 | orchestrator | 2026-04-11 02:50:43.428807 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-11 02:50:43.428819 | orchestrator | Saturday 11 April 2026 02:50:36 +0000 (0:00:00.426) 0:00:17.843 ******** 2026-04-11 02:50:43.428832 | orchestrator | ok: [testbed-node-3] => { 2026-04-11 02:50:43.428845 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-11 02:50:43.428857 | orchestrator | } 2026-04-11 02:50:43.428869 | orchestrator | 2026-04-11 02:50:43.428881 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-11 02:50:43.428894 | orchestrator | Saturday 11 April 2026 02:50:36 +0000 (0:00:00.161) 0:00:18.004 ******** 2026-04-11 02:50:43.428906 | orchestrator | ok: [testbed-node-3] => { 2026-04-11 02:50:43.428917 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-11 02:50:43.428924 | orchestrator | } 2026-04-11 02:50:43.428932 | orchestrator | 2026-04-11 02:50:43.428939 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-11 02:50:43.428961 | orchestrator | Saturday 11 April 2026 02:50:37 +0000 (0:00:00.154) 0:00:18.159 ******** 2026-04-11 02:50:43.428969 | orchestrator | ok: [testbed-node-3] => { 2026-04-11 02:50:43.428977 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-11 02:50:43.428984 | orchestrator | } 2026-04-11 02:50:43.428991 | orchestrator | 2026-04-11 02:50:43.428999 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-11 02:50:43.429006 | orchestrator | Saturday 11 April 2026 02:50:37 +0000 (0:00:00.158) 0:00:18.317 ******** 2026-04-11 02:50:43.429014 | orchestrator | ok: 
[testbed-node-3] 2026-04-11 02:50:43.429021 | orchestrator | 2026-04-11 02:50:43.429028 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-11 02:50:43.429035 | orchestrator | Saturday 11 April 2026 02:50:37 +0000 (0:00:00.713) 0:00:19.031 ******** 2026-04-11 02:50:43.429042 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:50:43.429050 | orchestrator | 2026-04-11 02:50:43.429057 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-11 02:50:43.429064 | orchestrator | Saturday 11 April 2026 02:50:38 +0000 (0:00:00.549) 0:00:19.580 ******** 2026-04-11 02:50:43.429071 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:50:43.429078 | orchestrator | 2026-04-11 02:50:43.429111 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-11 02:50:43.429120 | orchestrator | Saturday 11 April 2026 02:50:39 +0000 (0:00:00.538) 0:00:20.118 ******** 2026-04-11 02:50:43.429128 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:50:43.429136 | orchestrator | 2026-04-11 02:50:43.429144 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-11 02:50:43.429152 | orchestrator | Saturday 11 April 2026 02:50:39 +0000 (0:00:00.161) 0:00:20.280 ******** 2026-04-11 02:50:43.429161 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429169 | orchestrator | 2026-04-11 02:50:43.429177 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-11 02:50:43.429204 | orchestrator | Saturday 11 April 2026 02:50:39 +0000 (0:00:00.125) 0:00:20.406 ******** 2026-04-11 02:50:43.429213 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429222 | orchestrator | 2026-04-11 02:50:43.429230 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-11 02:50:43.429238 | orchestrator | 
Saturday 11 April 2026 02:50:39 +0000 (0:00:00.122) 0:00:20.528 ******** 2026-04-11 02:50:43.429246 | orchestrator | ok: [testbed-node-3] => { 2026-04-11 02:50:43.429255 | orchestrator |  "vgs_report": { 2026-04-11 02:50:43.429264 | orchestrator |  "vg": [] 2026-04-11 02:50:43.429272 | orchestrator |  } 2026-04-11 02:50:43.429280 | orchestrator | } 2026-04-11 02:50:43.429289 | orchestrator | 2026-04-11 02:50:43.429297 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-11 02:50:43.429306 | orchestrator | Saturday 11 April 2026 02:50:39 +0000 (0:00:00.172) 0:00:20.700 ******** 2026-04-11 02:50:43.429314 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429322 | orchestrator | 2026-04-11 02:50:43.429330 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-11 02:50:43.429338 | orchestrator | Saturday 11 April 2026 02:50:39 +0000 (0:00:00.147) 0:00:20.848 ******** 2026-04-11 02:50:43.429347 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429355 | orchestrator | 2026-04-11 02:50:43.429363 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-11 02:50:43.429372 | orchestrator | Saturday 11 April 2026 02:50:40 +0000 (0:00:00.413) 0:00:21.262 ******** 2026-04-11 02:50:43.429380 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429388 | orchestrator | 2026-04-11 02:50:43.429396 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-11 02:50:43.429405 | orchestrator | Saturday 11 April 2026 02:50:40 +0000 (0:00:00.160) 0:00:21.422 ******** 2026-04-11 02:50:43.429413 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429421 | orchestrator | 2026-04-11 02:50:43.429429 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-11 02:50:43.429438 | orchestrator | 
Saturday 11 April 2026 02:50:40 +0000 (0:00:00.150) 0:00:21.572 ******** 2026-04-11 02:50:43.429446 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429454 | orchestrator | 2026-04-11 02:50:43.429462 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-11 02:50:43.429470 | orchestrator | Saturday 11 April 2026 02:50:40 +0000 (0:00:00.158) 0:00:21.730 ******** 2026-04-11 02:50:43.429478 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429485 | orchestrator | 2026-04-11 02:50:43.429492 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-11 02:50:43.429499 | orchestrator | Saturday 11 April 2026 02:50:40 +0000 (0:00:00.152) 0:00:21.883 ******** 2026-04-11 02:50:43.429506 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429514 | orchestrator | 2026-04-11 02:50:43.429521 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-11 02:50:43.429528 | orchestrator | Saturday 11 April 2026 02:50:40 +0000 (0:00:00.151) 0:00:22.034 ******** 2026-04-11 02:50:43.429550 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429558 | orchestrator | 2026-04-11 02:50:43.429566 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-11 02:50:43.429573 | orchestrator | Saturday 11 April 2026 02:50:41 +0000 (0:00:00.163) 0:00:22.198 ******** 2026-04-11 02:50:43.429594 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429610 | orchestrator | 2026-04-11 02:50:43.429617 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-11 02:50:43.429625 | orchestrator | Saturday 11 April 2026 02:50:41 +0000 (0:00:00.152) 0:00:22.350 ******** 2026-04-11 02:50:43.429632 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429639 | orchestrator | 2026-04-11 02:50:43.429646 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-11 02:50:43.429653 | orchestrator | Saturday 11 April 2026 02:50:41 +0000 (0:00:00.149) 0:00:22.500 ******** 2026-04-11 02:50:43.429666 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429674 | orchestrator | 2026-04-11 02:50:43.429681 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-11 02:50:43.429688 | orchestrator | Saturday 11 April 2026 02:50:41 +0000 (0:00:00.152) 0:00:22.653 ******** 2026-04-11 02:50:43.429695 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429702 | orchestrator | 2026-04-11 02:50:43.429741 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-11 02:50:43.429754 | orchestrator | Saturday 11 April 2026 02:50:41 +0000 (0:00:00.170) 0:00:22.823 ******** 2026-04-11 02:50:43.429762 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429769 | orchestrator | 2026-04-11 02:50:43.429776 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-11 02:50:43.429783 | orchestrator | Saturday 11 April 2026 02:50:41 +0000 (0:00:00.156) 0:00:22.979 ******** 2026-04-11 02:50:43.429790 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429797 | orchestrator | 2026-04-11 02:50:43.429804 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-11 02:50:43.429812 | orchestrator | Saturday 11 April 2026 02:50:42 +0000 (0:00:00.439) 0:00:23.419 ******** 2026-04-11 02:50:43.429820 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 02:50:43.429830 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 
'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 02:50:43.429837 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429844 | orchestrator | 2026-04-11 02:50:43.429851 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-11 02:50:43.429859 | orchestrator | Saturday 11 April 2026 02:50:42 +0000 (0:00:00.165) 0:00:23.584 ******** 2026-04-11 02:50:43.429866 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 02:50:43.429873 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 02:50:43.429881 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429888 | orchestrator | 2026-04-11 02:50:43.429895 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-11 02:50:43.429902 | orchestrator | Saturday 11 April 2026 02:50:42 +0000 (0:00:00.173) 0:00:23.758 ******** 2026-04-11 02:50:43.429909 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 02:50:43.429916 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 02:50:43.429924 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429931 | orchestrator | 2026-04-11 02:50:43.429938 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-11 02:50:43.429945 | orchestrator | Saturday 11 April 2026 02:50:42 +0000 (0:00:00.172) 0:00:23.931 ******** 2026-04-11 02:50:43.429952 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 02:50:43.429960 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 02:50:43.429967 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.429974 | orchestrator | 2026-04-11 02:50:43.429981 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-11 02:50:43.429988 | orchestrator | Saturday 11 April 2026 02:50:43 +0000 (0:00:00.185) 0:00:24.116 ******** 2026-04-11 02:50:43.430002 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 02:50:43.430009 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 02:50:43.430062 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:43.430072 | orchestrator | 2026-04-11 02:50:43.430079 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-11 02:50:43.430086 | orchestrator | Saturday 11 April 2026 02:50:43 +0000 (0:00:00.162) 0:00:24.279 ******** 2026-04-11 02:50:43.430101 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 02:50:49.178393 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 02:50:49.178500 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:49.178516 | orchestrator | 2026-04-11 02:50:49.178529 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-04-11 02:50:49.178542 | orchestrator | Saturday 11 April 2026 02:50:43 +0000 (0:00:00.193) 0:00:24.472 ******** 2026-04-11 02:50:49.178553 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 02:50:49.178565 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 02:50:49.178576 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:49.178587 | orchestrator | 2026-04-11 02:50:49.178613 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-11 02:50:49.178625 | orchestrator | Saturday 11 April 2026 02:50:43 +0000 (0:00:00.168) 0:00:24.640 ******** 2026-04-11 02:50:49.178636 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 02:50:49.178647 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 02:50:49.178658 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:49.178669 | orchestrator | 2026-04-11 02:50:49.178680 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-11 02:50:49.178691 | orchestrator | Saturday 11 April 2026 02:50:43 +0000 (0:00:00.165) 0:00:24.806 ******** 2026-04-11 02:50:49.178702 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:50:49.178713 | orchestrator | 2026-04-11 02:50:49.178757 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-11 02:50:49.178770 | orchestrator | Saturday 11 April 2026 02:50:44 +0000 
(0:00:00.548) 0:00:25.354 ******** 2026-04-11 02:50:49.178780 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:50:49.178791 | orchestrator | 2026-04-11 02:50:49.178802 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-11 02:50:49.178813 | orchestrator | Saturday 11 April 2026 02:50:44 +0000 (0:00:00.525) 0:00:25.880 ******** 2026-04-11 02:50:49.178824 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:50:49.178834 | orchestrator | 2026-04-11 02:50:49.178845 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-11 02:50:49.178857 | orchestrator | Saturday 11 April 2026 02:50:44 +0000 (0:00:00.160) 0:00:26.040 ******** 2026-04-11 02:50:49.178868 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'vg_name': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'}) 2026-04-11 02:50:49.178880 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'vg_name': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'}) 2026-04-11 02:50:49.178916 | orchestrator | 2026-04-11 02:50:49.178929 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-11 02:50:49.178942 | orchestrator | Saturday 11 April 2026 02:50:45 +0000 (0:00:00.194) 0:00:26.234 ******** 2026-04-11 02:50:49.178954 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 02:50:49.178967 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 02:50:49.178980 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:49.178992 | orchestrator | 2026-04-11 02:50:49.179004 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-04-11 02:50:49.179016 | orchestrator | Saturday 11 April 2026 02:50:45 +0000 (0:00:00.409) 0:00:26.644 ******** 2026-04-11 02:50:49.179029 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 02:50:49.179041 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 02:50:49.179054 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:49.179066 | orchestrator | 2026-04-11 02:50:49.179078 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-11 02:50:49.179090 | orchestrator | Saturday 11 April 2026 02:50:45 +0000 (0:00:00.174) 0:00:26.818 ******** 2026-04-11 02:50:49.179102 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 02:50:49.179115 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 02:50:49.179127 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:50:49.179139 | orchestrator | 2026-04-11 02:50:49.179151 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-11 02:50:49.179163 | orchestrator | Saturday 11 April 2026 02:50:45 +0000 (0:00:00.183) 0:00:27.002 ******** 2026-04-11 02:50:49.179193 | orchestrator | ok: [testbed-node-3] => { 2026-04-11 02:50:49.179206 | orchestrator |  "lvm_report": { 2026-04-11 02:50:49.179219 | orchestrator |  "lv": [ 2026-04-11 02:50:49.179232 | orchestrator |  { 2026-04-11 02:50:49.179244 | orchestrator |  "lv_name": 
"osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200", 2026-04-11 02:50:49.179284 | orchestrator |  "vg_name": "ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200" 2026-04-11 02:50:49.179296 | orchestrator |  }, 2026-04-11 02:50:49.179307 | orchestrator |  { 2026-04-11 02:50:49.179317 | orchestrator |  "lv_name": "osd-block-c5955808-db0e-564c-b1b7-e2d336084003", 2026-04-11 02:50:49.179328 | orchestrator |  "vg_name": "ceph-c5955808-db0e-564c-b1b7-e2d336084003" 2026-04-11 02:50:49.179339 | orchestrator |  } 2026-04-11 02:50:49.179349 | orchestrator |  ], 2026-04-11 02:50:49.179360 | orchestrator |  "pv": [ 2026-04-11 02:50:49.179371 | orchestrator |  { 2026-04-11 02:50:49.179381 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-11 02:50:49.179392 | orchestrator |  "vg_name": "ceph-c5955808-db0e-564c-b1b7-e2d336084003" 2026-04-11 02:50:49.179403 | orchestrator |  }, 2026-04-11 02:50:49.179413 | orchestrator |  { 2026-04-11 02:50:49.179431 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-11 02:50:49.179442 | orchestrator |  "vg_name": "ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200" 2026-04-11 02:50:49.179452 | orchestrator |  } 2026-04-11 02:50:49.179463 | orchestrator |  ] 2026-04-11 02:50:49.179474 | orchestrator |  } 2026-04-11 02:50:49.179485 | orchestrator | } 2026-04-11 02:50:49.179506 | orchestrator | 2026-04-11 02:50:49.179518 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-11 02:50:49.179528 | orchestrator | 2026-04-11 02:50:49.179539 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-11 02:50:49.179550 | orchestrator | Saturday 11 April 2026 02:50:46 +0000 (0:00:00.358) 0:00:27.360 ******** 2026-04-11 02:50:49.179561 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-11 02:50:49.179572 | orchestrator | 2026-04-11 02:50:49.179583 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-11 
02:50:49.179594 | orchestrator | Saturday 11 April 2026 02:50:46 +0000 (0:00:00.261) 0:00:27.622 ******** 2026-04-11 02:50:49.179605 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:50:49.179615 | orchestrator | 2026-04-11 02:50:49.179626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:50:49.179644 | orchestrator | Saturday 11 April 2026 02:50:46 +0000 (0:00:00.267) 0:00:27.889 ******** 2026-04-11 02:50:49.179664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-11 02:50:49.179687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-11 02:50:49.179716 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-11 02:50:49.179759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-11 02:50:49.179776 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-11 02:50:49.179794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-11 02:50:49.179810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-11 02:50:49.179828 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-11 02:50:49.179846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-11 02:50:49.179863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-11 02:50:49.179881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-11 02:50:49.179899 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-11 02:50:49.179917 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-11 02:50:49.179936 | orchestrator | 2026-04-11 02:50:49.179954 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:50:49.180079 | orchestrator | Saturday 11 April 2026 02:50:47 +0000 (0:00:00.463) 0:00:28.353 ******** 2026-04-11 02:50:49.180092 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:50:49.180102 | orchestrator | 2026-04-11 02:50:49.180113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:50:49.180124 | orchestrator | Saturday 11 April 2026 02:50:47 +0000 (0:00:00.224) 0:00:28.577 ******** 2026-04-11 02:50:49.180135 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:50:49.180146 | orchestrator | 2026-04-11 02:50:49.180157 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:50:49.180168 | orchestrator | Saturday 11 April 2026 02:50:48 +0000 (0:00:00.708) 0:00:29.285 ******** 2026-04-11 02:50:49.180179 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:50:49.180189 | orchestrator | 2026-04-11 02:50:49.180200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:50:49.180211 | orchestrator | Saturday 11 April 2026 02:50:48 +0000 (0:00:00.264) 0:00:29.549 ******** 2026-04-11 02:50:49.180222 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:50:49.180233 | orchestrator | 2026-04-11 02:50:49.180243 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:50:49.180292 | orchestrator | Saturday 11 April 2026 02:50:48 +0000 (0:00:00.219) 0:00:29.768 ******** 2026-04-11 02:50:49.180316 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:50:49.180327 | orchestrator | 2026-04-11 02:50:49.180338 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-04-11 02:50:49.180352 | orchestrator | Saturday 11 April 2026 02:50:48 +0000 (0:00:00.213) 0:00:29.982 ******** 2026-04-11 02:50:49.180371 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:50:49.180392 | orchestrator | 2026-04-11 02:50:49.180434 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:01.658316 | orchestrator | Saturday 11 April 2026 02:50:49 +0000 (0:00:00.238) 0:00:30.221 ******** 2026-04-11 02:51:01.658444 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:51:01.658465 | orchestrator | 2026-04-11 02:51:01.658482 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:01.658499 | orchestrator | Saturday 11 April 2026 02:50:49 +0000 (0:00:00.245) 0:00:30.466 ******** 2026-04-11 02:51:01.658515 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:51:01.658530 | orchestrator | 2026-04-11 02:51:01.658545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:01.658562 | orchestrator | Saturday 11 April 2026 02:50:49 +0000 (0:00:00.223) 0:00:30.689 ******** 2026-04-11 02:51:01.658577 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4) 2026-04-11 02:51:01.658595 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4) 2026-04-11 02:51:01.658610 | orchestrator | 2026-04-11 02:51:01.658644 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:01.658660 | orchestrator | Saturday 11 April 2026 02:50:50 +0000 (0:00:00.438) 0:00:31.128 ******** 2026-04-11 02:51:01.658675 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb) 2026-04-11 02:51:01.658690 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb)
2026-04-11 02:51:01.658705 | orchestrator |
2026-04-11 02:51:01.658721 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:51:01.658736 | orchestrator | Saturday 11 April 2026 02:50:50 +0000 (0:00:00.502) 0:00:31.631 ********
2026-04-11 02:51:01.658782 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f)
2026-04-11 02:51:01.658798 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f)
2026-04-11 02:51:01.658814 | orchestrator |
2026-04-11 02:51:01.658829 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:51:01.658846 | orchestrator | Saturday 11 April 2026 02:50:51 +0000 (0:00:00.496) 0:00:32.128 ********
2026-04-11 02:51:01.658863 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac)
2026-04-11 02:51:01.658879 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac)
2026-04-11 02:51:01.658896 | orchestrator |
2026-04-11 02:51:01.658912 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-11 02:51:01.658930 | orchestrator | Saturday 11 April 2026 02:50:51 +0000 (0:00:00.765) 0:00:32.893 ********
2026-04-11 02:51:01.658947 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-11 02:51:01.658964 | orchestrator |
2026-04-11 02:51:01.658980 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.658997 | orchestrator | Saturday 11 April 2026 02:50:52 +0000 (0:00:00.664) 0:00:33.557 ********
2026-04-11 02:51:01.659012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-11 02:51:01.659029 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-11 02:51:01.659046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-11 02:51:01.659093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-11 02:51:01.659111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-11 02:51:01.659129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-11 02:51:01.659146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-11 02:51:01.659163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-11 02:51:01.659179 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-11 02:51:01.659197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-11 02:51:01.659214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-11 02:51:01.659230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-11 02:51:01.659245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-11 02:51:01.659262 | orchestrator |
2026-04-11 02:51:01.659277 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.659292 | orchestrator | Saturday 11 April 2026 02:50:53 +0000 (0:00:01.015) 0:00:34.573 ********
2026-04-11 02:51:01.659307 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.659322 | orchestrator |
2026-04-11 02:51:01.659337 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.659352 | orchestrator | Saturday 11 April 2026 02:50:53 +0000 (0:00:00.261) 0:00:34.834 ********
2026-04-11 02:51:01.659367 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.659382 | orchestrator |
2026-04-11 02:51:01.659397 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.659412 | orchestrator | Saturday 11 April 2026 02:50:54 +0000 (0:00:00.235) 0:00:35.069 ********
2026-04-11 02:51:01.659427 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.659442 | orchestrator |
2026-04-11 02:51:01.659483 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.659499 | orchestrator | Saturday 11 April 2026 02:50:54 +0000 (0:00:00.246) 0:00:35.315 ********
2026-04-11 02:51:01.659514 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.659529 | orchestrator |
2026-04-11 02:51:01.659544 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.659559 | orchestrator | Saturday 11 April 2026 02:50:54 +0000 (0:00:00.224) 0:00:35.540 ********
2026-04-11 02:51:01.659574 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.659589 | orchestrator |
2026-04-11 02:51:01.659604 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.659620 | orchestrator | Saturday 11 April 2026 02:50:54 +0000 (0:00:00.224) 0:00:35.764 ********
2026-04-11 02:51:01.659635 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.659650 | orchestrator |
2026-04-11 02:51:01.659666 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.659681 | orchestrator | Saturday 11 April 2026 02:50:54 +0000 (0:00:00.270) 0:00:36.034 ********
2026-04-11 02:51:01.659706 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.659722 | orchestrator |
2026-04-11 02:51:01.659738 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.659780 | orchestrator | Saturday 11 April 2026 02:50:55 +0000 (0:00:00.246) 0:00:36.281 ********
2026-04-11 02:51:01.659795 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.659810 | orchestrator |
2026-04-11 02:51:01.659825 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.659840 | orchestrator | Saturday 11 April 2026 02:50:55 +0000 (0:00:00.221) 0:00:36.502 ********
2026-04-11 02:51:01.659855 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-11 02:51:01.659888 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-11 02:51:01.659905 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-11 02:51:01.659921 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-11 02:51:01.659937 | orchestrator |
2026-04-11 02:51:01.659953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.659968 | orchestrator | Saturday 11 April 2026 02:50:56 +0000 (0:00:00.955) 0:00:37.458 ********
2026-04-11 02:51:01.659981 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.659993 | orchestrator |
2026-04-11 02:51:01.660005 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.660018 | orchestrator | Saturday 11 April 2026 02:50:57 +0000 (0:00:00.728) 0:00:38.187 ********
2026-04-11 02:51:01.660030 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.660042 | orchestrator |
2026-04-11 02:51:01.660054 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.660066 | orchestrator | Saturday 11 April 2026 02:50:57 +0000 (0:00:00.213) 0:00:38.400 ********
2026-04-11 02:51:01.660078 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.660091 | orchestrator |
2026-04-11 02:51:01.660102 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-11 02:51:01.660115 | orchestrator | Saturday 11 April 2026 02:50:57 +0000 (0:00:00.260) 0:00:38.661 ********
2026-04-11 02:51:01.660127 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.660139 | orchestrator |
2026-04-11 02:51:01.660151 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-11 02:51:01.660163 | orchestrator | Saturday 11 April 2026 02:50:57 +0000 (0:00:00.234) 0:00:38.896 ********
2026-04-11 02:51:01.660175 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.660188 | orchestrator |
2026-04-11 02:51:01.660200 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-11 02:51:01.660212 | orchestrator | Saturday 11 April 2026 02:50:58 +0000 (0:00:00.168) 0:00:39.064 ********
2026-04-11 02:51:01.660225 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4afe3055-abd0-5615-b44c-a776d8127855'}})
2026-04-11 02:51:01.660237 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1c2bdb62-89ba-5856-b2e0-5db351397ca2'}})
2026-04-11 02:51:01.660250 | orchestrator |
2026-04-11 02:51:01.660262 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-11 02:51:01.660274 | orchestrator | Saturday 11 April 2026 02:50:58 +0000 (0:00:00.236) 0:00:39.301 ********
2026-04-11 02:51:01.660287 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:01.660301 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:01.660313 | orchestrator |
2026-04-11 02:51:01.660326 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-11 02:51:01.660338 | orchestrator | Saturday 11 April 2026 02:51:00 +0000 (0:00:01.844) 0:00:41.145 ********
2026-04-11 02:51:01.660350 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:01.660364 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:01.660376 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:01.660389 | orchestrator |
2026-04-11 02:51:01.660401 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-11 02:51:01.660413 | orchestrator | Saturday 11 April 2026 02:51:00 +0000 (0:00:00.166) 0:00:41.312 ********
2026-04-11 02:51:01.660426 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:01.660459 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:07.928798 | orchestrator |
2026-04-11 02:51:07.928927 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-11 02:51:07.928956 | orchestrator | Saturday 11 April 2026 02:51:01 +0000 (0:00:01.381) 0:00:42.694 ********
2026-04-11 02:51:07.928976 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:07.928996 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:07.929015 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.929034 | orchestrator |
2026-04-11 02:51:07.929071 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-11 02:51:07.929091 | orchestrator | Saturday 11 April 2026 02:51:01 +0000 (0:00:00.168) 0:00:42.862 ********
2026-04-11 02:51:07.929111 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.929128 | orchestrator |
2026-04-11 02:51:07.929144 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-11 02:51:07.929154 | orchestrator | Saturday 11 April 2026 02:51:01 +0000 (0:00:00.148) 0:00:43.011 ********
2026-04-11 02:51:07.929164 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:07.929174 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:07.929184 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.929194 | orchestrator |
2026-04-11 02:51:07.929204 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-11 02:51:07.929214 | orchestrator | Saturday 11 April 2026 02:51:02 +0000 (0:00:00.165) 0:00:43.177 ********
2026-04-11 02:51:07.929230 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.929246 | orchestrator |
2026-04-11 02:51:07.929262 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-11 02:51:07.929279 | orchestrator | Saturday 11 April 2026 02:51:02 +0000 (0:00:00.180) 0:00:43.357 ********
2026-04-11 02:51:07.929297 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:07.929315 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:07.929331 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.929349 | orchestrator |
2026-04-11 02:51:07.929366 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-11 02:51:07.929383 | orchestrator | Saturday 11 April 2026 02:51:02 +0000 (0:00:00.432) 0:00:43.790 ********
2026-04-11 02:51:07.929400 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.929416 | orchestrator |
2026-04-11 02:51:07.929434 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-11 02:51:07.929452 | orchestrator | Saturday 11 April 2026 02:51:02 +0000 (0:00:00.161) 0:00:43.951 ********
2026-04-11 02:51:07.929469 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:07.929486 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:07.929503 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.929519 | orchestrator |
2026-04-11 02:51:07.929535 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-11 02:51:07.929585 | orchestrator | Saturday 11 April 2026 02:51:03 +0000 (0:00:00.189) 0:00:44.141 ********
2026-04-11 02:51:07.929602 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:51:07.929619 | orchestrator |
2026-04-11 02:51:07.929635 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-11 02:51:07.929651 | orchestrator | Saturday 11 April 2026 02:51:03 +0000 (0:00:00.153) 0:00:44.294 ********
2026-04-11 02:51:07.929666 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:07.929682 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:07.929698 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.929713 | orchestrator |
2026-04-11 02:51:07.929728 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-11 02:51:07.929744 | orchestrator | Saturday 11 April 2026 02:51:03 +0000 (0:00:00.167) 0:00:44.462 ********
2026-04-11 02:51:07.929788 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:07.929806 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:07.929824 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.929841 | orchestrator |
2026-04-11 02:51:07.929858 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-11 02:51:07.929900 | orchestrator | Saturday 11 April 2026 02:51:03 +0000 (0:00:00.170) 0:00:44.633 ********
2026-04-11 02:51:07.929919 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:07.929954 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:07.929973 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.930004 | orchestrator |
2026-04-11 02:51:07.930094 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-11 02:51:07.930117 | orchestrator | Saturday 11 April 2026 02:51:03 +0000 (0:00:00.165) 0:00:44.799 ********
2026-04-11 02:51:07.930146 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.930164 | orchestrator |
2026-04-11 02:51:07.930182 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-11 02:51:07.930199 | orchestrator | Saturday 11 April 2026 02:51:03 +0000 (0:00:00.159) 0:00:44.958 ********
2026-04-11 02:51:07.930217 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.930234 | orchestrator |
2026-04-11 02:51:07.930253 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-11 02:51:07.930270 | orchestrator | Saturday 11 April 2026 02:51:04 +0000 (0:00:00.186) 0:00:45.145 ********
2026-04-11 02:51:07.930287 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.930304 | orchestrator |
2026-04-11 02:51:07.930321 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-11 02:51:07.930339 | orchestrator | Saturday 11 April 2026 02:51:04 +0000 (0:00:00.156) 0:00:45.301 ********
2026-04-11 02:51:07.930357 | orchestrator | ok: [testbed-node-4] => {
2026-04-11 02:51:07.930375 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-11 02:51:07.930392 | orchestrator | }
2026-04-11 02:51:07.930410 | orchestrator |
2026-04-11 02:51:07.930428 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-11 02:51:07.930445 | orchestrator | Saturday 11 April 2026 02:51:04 +0000 (0:00:00.172) 0:00:45.473 ********
2026-04-11 02:51:07.930463 | orchestrator | ok: [testbed-node-4] => {
2026-04-11 02:51:07.930480 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-11 02:51:07.930510 | orchestrator | }
2026-04-11 02:51:07.930521 | orchestrator |
2026-04-11 02:51:07.930530 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-11 02:51:07.930540 | orchestrator | Saturday 11 April 2026 02:51:04 +0000 (0:00:00.223) 0:00:45.696 ********
2026-04-11 02:51:07.930549 | orchestrator | ok: [testbed-node-4] => {
2026-04-11 02:51:07.930559 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-11 02:51:07.930569 | orchestrator | }
2026-04-11 02:51:07.930578 | orchestrator |
2026-04-11 02:51:07.930588 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-11 02:51:07.930597 | orchestrator | Saturday 11 April 2026 02:51:05 +0000 (0:00:00.401) 0:00:46.098 ********
2026-04-11 02:51:07.930607 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:51:07.930617 | orchestrator |
2026-04-11 02:51:07.930626 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-11 02:51:07.930636 | orchestrator | Saturday 11 April 2026 02:51:05 +0000 (0:00:00.525) 0:00:46.623 ********
2026-04-11 02:51:07.930645 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:51:07.930655 | orchestrator |
2026-04-11 02:51:07.930665 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-11 02:51:07.930674 | orchestrator | Saturday 11 April 2026 02:51:06 +0000 (0:00:00.540) 0:00:47.163 ********
2026-04-11 02:51:07.930684 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:51:07.930694 | orchestrator |
2026-04-11 02:51:07.930704 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-11 02:51:07.930713 | orchestrator | Saturday 11 April 2026 02:51:06 +0000 (0:00:00.576) 0:00:47.740 ********
2026-04-11 02:51:07.930723 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:51:07.930733 | orchestrator |
2026-04-11 02:51:07.930742 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-11 02:51:07.930783 | orchestrator | Saturday 11 April 2026 02:51:06 +0000 (0:00:00.193) 0:00:47.934 ********
2026-04-11 02:51:07.930799 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.930809 | orchestrator |
2026-04-11 02:51:07.930818 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-11 02:51:07.930829 | orchestrator | Saturday 11 April 2026 02:51:07 +0000 (0:00:00.140) 0:00:48.075 ********
2026-04-11 02:51:07.930838 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.930848 | orchestrator |
2026-04-11 02:51:07.930857 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-11 02:51:07.930867 | orchestrator | Saturday 11 April 2026 02:51:07 +0000 (0:00:00.121) 0:00:48.196 ********
2026-04-11 02:51:07.930877 | orchestrator | ok: [testbed-node-4] => {
2026-04-11 02:51:07.930886 | orchestrator |     "vgs_report": {
2026-04-11 02:51:07.930898 | orchestrator |         "vg": []
2026-04-11 02:51:07.930907 | orchestrator |     }
2026-04-11 02:51:07.930917 | orchestrator | }
2026-04-11 02:51:07.930927 | orchestrator |
2026-04-11 02:51:07.930937 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-11 02:51:07.930946 | orchestrator | Saturday 11 April 2026 02:51:07 +0000 (0:00:00.159) 0:00:48.356 ********
2026-04-11 02:51:07.930956 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.930965 | orchestrator |
2026-04-11 02:51:07.930975 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-11 02:51:07.930985 | orchestrator | Saturday 11 April 2026 02:51:07 +0000 (0:00:00.167) 0:00:48.523 ********
2026-04-11 02:51:07.930996 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.931013 | orchestrator |
2026-04-11 02:51:07.931029 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-11 02:51:07.931045 | orchestrator | Saturday 11 April 2026 02:51:07 +0000 (0:00:00.155) 0:00:48.679 ********
2026-04-11 02:51:07.931061 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.931077 | orchestrator |
2026-04-11 02:51:07.931094 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-11 02:51:07.931112 | orchestrator | Saturday 11 April 2026 02:51:07 +0000 (0:00:00.137) 0:00:48.817 ********
2026-04-11 02:51:07.931143 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:07.931160 | orchestrator |
2026-04-11 02:51:07.931194 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-11 02:51:13.232046 | orchestrator | Saturday 11 April 2026 02:51:07 +0000 (0:00:00.153) 0:00:48.970 ********
2026-04-11 02:51:13.232134 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232144 | orchestrator |
2026-04-11 02:51:13.232153 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-11 02:51:13.232160 | orchestrator | Saturday 11 April 2026 02:51:08 +0000 (0:00:00.397) 0:00:49.368 ********
2026-04-11 02:51:13.232167 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232173 | orchestrator |
2026-04-11 02:51:13.232180 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-11 02:51:13.232187 | orchestrator | Saturday 11 April 2026 02:51:08 +0000 (0:00:00.189) 0:00:49.557 ********
2026-04-11 02:51:13.232193 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232200 | orchestrator |
2026-04-11 02:51:13.232218 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-11 02:51:13.232225 | orchestrator | Saturday 11 April 2026 02:51:08 +0000 (0:00:00.155) 0:00:49.713 ********
2026-04-11 02:51:13.232235 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232246 | orchestrator |
2026-04-11 02:51:13.232262 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-11 02:51:13.232274 | orchestrator | Saturday 11 April 2026 02:51:08 +0000 (0:00:00.163) 0:00:49.876 ********
2026-04-11 02:51:13.232284 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232295 | orchestrator |
2026-04-11 02:51:13.232305 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-11 02:51:13.232315 | orchestrator | Saturday 11 April 2026 02:51:08 +0000 (0:00:00.143) 0:00:50.020 ********
2026-04-11 02:51:13.232326 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232337 | orchestrator |
2026-04-11 02:51:13.232348 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-11 02:51:13.232360 | orchestrator | Saturday 11 April 2026 02:51:09 +0000 (0:00:00.173) 0:00:50.193 ********
2026-04-11 02:51:13.232371 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232382 | orchestrator |
2026-04-11 02:51:13.232393 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-11 02:51:13.232406 | orchestrator | Saturday 11 April 2026 02:51:09 +0000 (0:00:00.159) 0:00:50.352 ********
2026-04-11 02:51:13.232414 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232420 | orchestrator |
2026-04-11 02:51:13.232427 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-11 02:51:13.232436 | orchestrator | Saturday 11 April 2026 02:51:09 +0000 (0:00:00.159) 0:00:50.512 ********
2026-04-11 02:51:13.232446 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232456 | orchestrator |
2026-04-11 02:51:13.232466 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-11 02:51:13.232477 | orchestrator | Saturday 11 April 2026 02:51:09 +0000 (0:00:00.161) 0:00:50.673 ********
2026-04-11 02:51:13.232487 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232497 | orchestrator |
2026-04-11 02:51:13.232507 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-11 02:51:13.232518 | orchestrator | Saturday 11 April 2026 02:51:09 +0000 (0:00:00.136) 0:00:50.810 ********
2026-04-11 02:51:13.232530 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:13.232542 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:13.232553 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232564 | orchestrator |
2026-04-11 02:51:13.232574 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-11 02:51:13.232611 | orchestrator | Saturday 11 April 2026 02:51:09 +0000 (0:00:00.168) 0:00:50.978 ********
2026-04-11 02:51:13.232623 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:13.232635 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:13.232645 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232656 | orchestrator |
2026-04-11 02:51:13.232667 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-11 02:51:13.232678 | orchestrator | Saturday 11 April 2026 02:51:10 +0000 (0:00:00.190) 0:00:51.168 ********
2026-04-11 02:51:13.232688 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:13.232699 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:13.232706 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232713 | orchestrator |
2026-04-11 02:51:13.232720 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-11 02:51:13.232731 | orchestrator | Saturday 11 April 2026 02:51:10 +0000 (0:00:00.445) 0:00:51.614 ********
2026-04-11 02:51:13.232742 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:13.232752 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:13.232822 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232834 | orchestrator |
2026-04-11 02:51:13.232865 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-11 02:51:13.232878 | orchestrator | Saturday 11 April 2026 02:51:10 +0000 (0:00:00.178) 0:00:51.793 ********
2026-04-11 02:51:13.232890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:13.232901 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:13.232913 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232925 | orchestrator |
2026-04-11 02:51:13.232944 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-11 02:51:13.232955 | orchestrator | Saturday 11 April 2026 02:51:10 +0000 (0:00:00.177) 0:00:51.971 ********
2026-04-11 02:51:13.232966 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:13.232977 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:13.232988 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.232999 | orchestrator |
2026-04-11 02:51:13.233011 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-11 02:51:13.233022 | orchestrator | Saturday 11 April 2026 02:51:11 +0000 (0:00:00.181) 0:00:52.152 ********
2026-04-11 02:51:13.233032 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:13.233044 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:13.233062 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.233084 | orchestrator |
2026-04-11 02:51:13.233091 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-11 02:51:13.233097 | orchestrator | Saturday 11 April 2026 02:51:11 +0000 (0:00:00.165) 0:00:52.318 ********
2026-04-11 02:51:13.233103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:13.233109 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:13.233116 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.233122 | orchestrator |
2026-04-11 02:51:13.233129 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-11 02:51:13.233140 | orchestrator | Saturday 11 April 2026 02:51:11 +0000 (0:00:00.160) 0:00:52.479 ********
2026-04-11 02:51:13.233150 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:51:13.233162 | orchestrator |
2026-04-11 02:51:13.233173 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-11 02:51:13.233184 | orchestrator | Saturday 11 April 2026 02:51:11 +0000 (0:00:00.535) 0:00:53.014 ********
2026-04-11 02:51:13.233195 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:51:13.233202 | orchestrator |
2026-04-11 02:51:13.233208 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-11 02:51:13.233214 | orchestrator | Saturday 11 April 2026 02:51:12 +0000 (0:00:00.560) 0:00:53.575 ********
2026-04-11 02:51:13.233220 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:51:13.233226 | orchestrator |
2026-04-11 02:51:13.233233 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-11 02:51:13.233239 | orchestrator | Saturday 11 April 2026 02:51:12 +0000 (0:00:00.155) 0:00:53.731 ********
2026-04-11 02:51:13.233245 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'vg_name': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:13.233253 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'vg_name': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:13.233259 | orchestrator |
2026-04-11 02:51:13.233265 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-11 02:51:13.233271 | orchestrator | Saturday 11 April 2026 02:51:12 +0000 (0:00:00.177) 0:00:53.908 ********
2026-04-11 02:51:13.233277 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:13.233283 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:13.233290 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:13.233296 | orchestrator |
2026-04-11 02:51:13.233302 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-11 02:51:13.233308 | orchestrator | Saturday 11 April 2026 02:51:13 +0000 (0:00:00.173) 0:00:54.082 ********
2026-04-11 02:51:13.233314 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})
2026-04-11 02:51:13.233327 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})
2026-04-11 02:51:20.639556 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:51:20.639701 | orchestrator |
2026-04-11 02:51:20.639741 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-11 02:51:20.639759 |
orchestrator | Saturday 11 April 2026 02:51:13 +0000 (0:00:00.191) 0:00:54.274 ******** 2026-04-11 02:51:20.639836 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})  2026-04-11 02:51:20.639906 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})  2026-04-11 02:51:20.639922 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:51:20.639932 | orchestrator | 2026-04-11 02:51:20.639941 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-11 02:51:20.639950 | orchestrator | Saturday 11 April 2026 02:51:13 +0000 (0:00:00.410) 0:00:54.684 ******** 2026-04-11 02:51:20.639959 | orchestrator | ok: [testbed-node-4] => { 2026-04-11 02:51:20.639968 | orchestrator |  "lvm_report": { 2026-04-11 02:51:20.639979 | orchestrator |  "lv": [ 2026-04-11 02:51:20.639988 | orchestrator |  { 2026-04-11 02:51:20.639997 | orchestrator |  "lv_name": "osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2", 2026-04-11 02:51:20.640006 | orchestrator |  "vg_name": "ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2" 2026-04-11 02:51:20.640015 | orchestrator |  }, 2026-04-11 02:51:20.640024 | orchestrator |  { 2026-04-11 02:51:20.640033 | orchestrator |  "lv_name": "osd-block-4afe3055-abd0-5615-b44c-a776d8127855", 2026-04-11 02:51:20.640041 | orchestrator |  "vg_name": "ceph-4afe3055-abd0-5615-b44c-a776d8127855" 2026-04-11 02:51:20.640050 | orchestrator |  } 2026-04-11 02:51:20.640059 | orchestrator |  ], 2026-04-11 02:51:20.640067 | orchestrator |  "pv": [ 2026-04-11 02:51:20.640076 | orchestrator |  { 2026-04-11 02:51:20.640086 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-11 02:51:20.640096 | orchestrator |  "vg_name": "ceph-4afe3055-abd0-5615-b44c-a776d8127855" 2026-04-11 02:51:20.640107 | orchestrator |  }, 2026-04-11 
02:51:20.640117 | orchestrator |  { 2026-04-11 02:51:20.640127 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-11 02:51:20.640137 | orchestrator |  "vg_name": "ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2" 2026-04-11 02:51:20.640147 | orchestrator |  } 2026-04-11 02:51:20.640157 | orchestrator |  ] 2026-04-11 02:51:20.640167 | orchestrator |  } 2026-04-11 02:51:20.640176 | orchestrator | } 2026-04-11 02:51:20.640184 | orchestrator | 2026-04-11 02:51:20.640193 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-11 02:51:20.640202 | orchestrator | 2026-04-11 02:51:20.640211 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-11 02:51:20.640219 | orchestrator | Saturday 11 April 2026 02:51:13 +0000 (0:00:00.338) 0:00:55.022 ******** 2026-04-11 02:51:20.640228 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-11 02:51:20.640237 | orchestrator | 2026-04-11 02:51:20.640245 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-11 02:51:20.640254 | orchestrator | Saturday 11 April 2026 02:51:14 +0000 (0:00:00.316) 0:00:55.339 ******** 2026-04-11 02:51:20.640263 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:51:20.640271 | orchestrator | 2026-04-11 02:51:20.640280 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.640289 | orchestrator | Saturday 11 April 2026 02:51:14 +0000 (0:00:00.287) 0:00:55.627 ******** 2026-04-11 02:51:20.640297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-11 02:51:20.640306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-11 02:51:20.640314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-11 02:51:20.640323 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-11 02:51:20.640331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-11 02:51:20.640340 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-11 02:51:20.640349 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-11 02:51:20.640367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-11 02:51:20.640375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-11 02:51:20.640384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-11 02:51:20.640393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-11 02:51:20.640401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-11 02:51:20.640410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-11 02:51:20.640419 | orchestrator | 2026-04-11 02:51:20.640427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.640436 | orchestrator | Saturday 11 April 2026 02:51:15 +0000 (0:00:00.526) 0:00:56.153 ******** 2026-04-11 02:51:20.640444 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:20.640453 | orchestrator | 2026-04-11 02:51:20.640462 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.640470 | orchestrator | Saturday 11 April 2026 02:51:15 +0000 (0:00:00.245) 0:00:56.398 ******** 2026-04-11 02:51:20.640479 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:20.640487 | orchestrator | 2026-04-11 
02:51:20.640496 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.640522 | orchestrator | Saturday 11 April 2026 02:51:15 +0000 (0:00:00.238) 0:00:56.637 ******** 2026-04-11 02:51:20.640532 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:20.640540 | orchestrator | 2026-04-11 02:51:20.640549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.640557 | orchestrator | Saturday 11 April 2026 02:51:15 +0000 (0:00:00.228) 0:00:56.866 ******** 2026-04-11 02:51:20.640566 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:20.640575 | orchestrator | 2026-04-11 02:51:20.640583 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.640592 | orchestrator | Saturday 11 April 2026 02:51:16 +0000 (0:00:00.747) 0:00:57.613 ******** 2026-04-11 02:51:20.640601 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:20.640610 | orchestrator | 2026-04-11 02:51:20.640618 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.640627 | orchestrator | Saturday 11 April 2026 02:51:16 +0000 (0:00:00.233) 0:00:57.847 ******** 2026-04-11 02:51:20.640636 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:20.640644 | orchestrator | 2026-04-11 02:51:20.640653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.640661 | orchestrator | Saturday 11 April 2026 02:51:17 +0000 (0:00:00.296) 0:00:58.143 ******** 2026-04-11 02:51:20.640670 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:20.640681 | orchestrator | 2026-04-11 02:51:20.640695 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.640710 | orchestrator | Saturday 11 April 2026 02:51:17 +0000 (0:00:00.232) 
0:00:58.375 ******** 2026-04-11 02:51:20.640732 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:20.640747 | orchestrator | 2026-04-11 02:51:20.640761 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.640800 | orchestrator | Saturday 11 April 2026 02:51:17 +0000 (0:00:00.232) 0:00:58.608 ******** 2026-04-11 02:51:20.640815 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20) 2026-04-11 02:51:20.640830 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20) 2026-04-11 02:51:20.640844 | orchestrator | 2026-04-11 02:51:20.640858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.640872 | orchestrator | Saturday 11 April 2026 02:51:18 +0000 (0:00:00.513) 0:00:59.122 ******** 2026-04-11 02:51:20.640928 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3) 2026-04-11 02:51:20.640957 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3) 2026-04-11 02:51:20.640969 | orchestrator | 2026-04-11 02:51:20.640978 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.640987 | orchestrator | Saturday 11 April 2026 02:51:18 +0000 (0:00:00.479) 0:00:59.601 ******** 2026-04-11 02:51:20.640995 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78) 2026-04-11 02:51:20.641004 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78) 2026-04-11 02:51:20.641013 | orchestrator | 2026-04-11 02:51:20.641022 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.641030 | orchestrator | Saturday 11 
April 2026 02:51:19 +0000 (0:00:00.474) 0:01:00.076 ******** 2026-04-11 02:51:20.641039 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735) 2026-04-11 02:51:20.641048 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735) 2026-04-11 02:51:20.641057 | orchestrator | 2026-04-11 02:51:20.641066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-11 02:51:20.641074 | orchestrator | Saturday 11 April 2026 02:51:19 +0000 (0:00:00.505) 0:01:00.582 ******** 2026-04-11 02:51:20.641083 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-11 02:51:20.641092 | orchestrator | 2026-04-11 02:51:20.641100 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:20.641109 | orchestrator | Saturday 11 April 2026 02:51:19 +0000 (0:00:00.384) 0:01:00.966 ******** 2026-04-11 02:51:20.641118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-11 02:51:20.641126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-11 02:51:20.641135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-11 02:51:20.641144 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-11 02:51:20.641153 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-11 02:51:20.641161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-11 02:51:20.641170 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-11 02:51:20.641178 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-11 02:51:20.641192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-11 02:51:20.641207 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-11 02:51:20.641222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-11 02:51:20.641247 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-11 02:51:30.944635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-11 02:51:30.944741 | orchestrator | 2026-04-11 02:51:30.944754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:30.944762 | orchestrator | Saturday 11 April 2026 02:51:20 +0000 (0:00:00.706) 0:01:01.673 ******** 2026-04-11 02:51:30.944770 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.944777 | orchestrator | 2026-04-11 02:51:30.944832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:30.944857 | orchestrator | Saturday 11 April 2026 02:51:20 +0000 (0:00:00.234) 0:01:01.908 ******** 2026-04-11 02:51:30.944864 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.944893 | orchestrator | 2026-04-11 02:51:30.944900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:30.944907 | orchestrator | Saturday 11 April 2026 02:51:21 +0000 (0:00:00.240) 0:01:02.148 ******** 2026-04-11 02:51:30.944914 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.944921 | orchestrator | 2026-04-11 02:51:30.944928 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:30.944935 | 
orchestrator | Saturday 11 April 2026 02:51:21 +0000 (0:00:00.239) 0:01:02.388 ******** 2026-04-11 02:51:30.944942 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.944949 | orchestrator | 2026-04-11 02:51:30.944955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:30.944962 | orchestrator | Saturday 11 April 2026 02:51:21 +0000 (0:00:00.224) 0:01:02.613 ******** 2026-04-11 02:51:30.944969 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.944976 | orchestrator | 2026-04-11 02:51:30.944983 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:30.944990 | orchestrator | Saturday 11 April 2026 02:51:21 +0000 (0:00:00.272) 0:01:02.885 ******** 2026-04-11 02:51:30.944997 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.945004 | orchestrator | 2026-04-11 02:51:30.945011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:30.945018 | orchestrator | Saturday 11 April 2026 02:51:22 +0000 (0:00:00.242) 0:01:03.128 ******** 2026-04-11 02:51:30.945025 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.945032 | orchestrator | 2026-04-11 02:51:30.945039 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:30.945046 | orchestrator | Saturday 11 April 2026 02:51:22 +0000 (0:00:00.238) 0:01:03.366 ******** 2026-04-11 02:51:30.945054 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.945061 | orchestrator | 2026-04-11 02:51:30.945068 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:30.945075 | orchestrator | Saturday 11 April 2026 02:51:22 +0000 (0:00:00.233) 0:01:03.600 ******** 2026-04-11 02:51:30.945082 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-11 02:51:30.945090 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-04-11 02:51:30.945098 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-11 02:51:30.945104 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-11 02:51:30.945111 | orchestrator | 2026-04-11 02:51:30.945119 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:30.945126 | orchestrator | Saturday 11 April 2026 02:51:23 +0000 (0:00:01.047) 0:01:04.648 ******** 2026-04-11 02:51:30.945133 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.945139 | orchestrator | 2026-04-11 02:51:30.945146 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:30.945153 | orchestrator | Saturday 11 April 2026 02:51:24 +0000 (0:00:00.906) 0:01:05.554 ******** 2026-04-11 02:51:30.945160 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.945167 | orchestrator | 2026-04-11 02:51:30.945174 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:30.945181 | orchestrator | Saturday 11 April 2026 02:51:24 +0000 (0:00:00.235) 0:01:05.789 ******** 2026-04-11 02:51:30.945188 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.945196 | orchestrator | 2026-04-11 02:51:30.945204 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-11 02:51:30.945212 | orchestrator | Saturday 11 April 2026 02:51:24 +0000 (0:00:00.249) 0:01:06.039 ******** 2026-04-11 02:51:30.945220 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.945229 | orchestrator | 2026-04-11 02:51:30.945236 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-11 02:51:30.945245 | orchestrator | Saturday 11 April 2026 02:51:25 +0000 (0:00:00.237) 0:01:06.276 ******** 2026-04-11 02:51:30.945253 | orchestrator | skipping: [testbed-node-5] 2026-04-11 
02:51:30.945261 | orchestrator | 2026-04-11 02:51:30.945276 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-11 02:51:30.945283 | orchestrator | Saturday 11 April 2026 02:51:25 +0000 (0:00:00.167) 0:01:06.444 ******** 2026-04-11 02:51:30.945310 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e8a3f20d-ed3f-5f34-b319-d0862efd8412'}}) 2026-04-11 02:51:30.945319 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a718c651-a264-5d59-a3a1-3dddb23bb056'}}) 2026-04-11 02:51:30.945326 | orchestrator | 2026-04-11 02:51:30.945334 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-11 02:51:30.945343 | orchestrator | Saturday 11 April 2026 02:51:25 +0000 (0:00:00.203) 0:01:06.647 ******** 2026-04-11 02:51:30.945352 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'}) 2026-04-11 02:51:30.945362 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'}) 2026-04-11 02:51:30.945370 | orchestrator | 2026-04-11 02:51:30.945378 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-11 02:51:30.945403 | orchestrator | Saturday 11 April 2026 02:51:27 +0000 (0:00:01.957) 0:01:08.605 ******** 2026-04-11 02:51:30.945411 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})  2026-04-11 02:51:30.945421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})  2026-04-11 02:51:30.945428 | orchestrator | skipping: 
[testbed-node-5] 2026-04-11 02:51:30.945437 | orchestrator | 2026-04-11 02:51:30.945450 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-11 02:51:30.945458 | orchestrator | Saturday 11 April 2026 02:51:27 +0000 (0:00:00.202) 0:01:08.808 ******** 2026-04-11 02:51:30.945467 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'}) 2026-04-11 02:51:30.945476 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'}) 2026-04-11 02:51:30.945484 | orchestrator | 2026-04-11 02:51:30.945492 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-11 02:51:30.945500 | orchestrator | Saturday 11 April 2026 02:51:29 +0000 (0:00:01.372) 0:01:10.180 ******** 2026-04-11 02:51:30.945508 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})  2026-04-11 02:51:30.945516 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})  2026-04-11 02:51:30.945524 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.945532 | orchestrator | 2026-04-11 02:51:30.945540 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-11 02:51:30.945549 | orchestrator | Saturday 11 April 2026 02:51:29 +0000 (0:00:00.163) 0:01:10.343 ******** 2026-04-11 02:51:30.945557 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.945565 | orchestrator | 2026-04-11 02:51:30.945572 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-11 02:51:30.945580 | 
orchestrator | Saturday 11 April 2026 02:51:29 +0000 (0:00:00.179) 0:01:10.523 ******** 2026-04-11 02:51:30.945588 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})  2026-04-11 02:51:30.945596 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})  2026-04-11 02:51:30.945612 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.945619 | orchestrator | 2026-04-11 02:51:30.945626 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-11 02:51:30.945633 | orchestrator | Saturday 11 April 2026 02:51:29 +0000 (0:00:00.404) 0:01:10.928 ******** 2026-04-11 02:51:30.945640 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.945647 | orchestrator | 2026-04-11 02:51:30.945654 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-11 02:51:30.945661 | orchestrator | Saturday 11 April 2026 02:51:30 +0000 (0:00:00.166) 0:01:11.094 ******** 2026-04-11 02:51:30.945668 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})  2026-04-11 02:51:30.945675 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})  2026-04-11 02:51:30.945683 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.945690 | orchestrator | 2026-04-11 02:51:30.945697 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-11 02:51:30.945704 | orchestrator | Saturday 11 April 2026 02:51:30 +0000 (0:00:00.204) 0:01:11.299 ******** 2026-04-11 02:51:30.945711 | orchestrator | 
skipping: [testbed-node-5] 2026-04-11 02:51:30.945719 | orchestrator | 2026-04-11 02:51:30.945726 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-11 02:51:30.945733 | orchestrator | Saturday 11 April 2026 02:51:30 +0000 (0:00:00.164) 0:01:11.463 ******** 2026-04-11 02:51:30.945740 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})  2026-04-11 02:51:30.945747 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})  2026-04-11 02:51:30.945754 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:30.945761 | orchestrator | 2026-04-11 02:51:30.945768 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-11 02:51:30.945775 | orchestrator | Saturday 11 April 2026 02:51:30 +0000 (0:00:00.179) 0:01:11.642 ******** 2026-04-11 02:51:30.945803 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:51:30.945811 | orchestrator | 2026-04-11 02:51:30.945818 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-11 02:51:30.945825 | orchestrator | Saturday 11 April 2026 02:51:30 +0000 (0:00:00.167) 0:01:11.810 ******** 2026-04-11 02:51:30.945838 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})  2026-04-11 02:51:38.205736 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})  2026-04-11 02:51:38.205900 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:38.205912 | orchestrator | 2026-04-11 02:51:38.205919 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-04-11 02:51:38.205926 | orchestrator | Saturday 11 April 2026 02:51:30 +0000 (0:00:00.176) 0:01:11.986 ******** 2026-04-11 02:51:38.205943 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})  2026-04-11 02:51:38.205949 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})  2026-04-11 02:51:38.205953 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:38.205958 | orchestrator | 2026-04-11 02:51:38.205963 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-11 02:51:38.205967 | orchestrator | Saturday 11 April 2026 02:51:31 +0000 (0:00:00.169) 0:01:12.155 ******** 2026-04-11 02:51:38.205989 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})  2026-04-11 02:51:38.205994 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})  2026-04-11 02:51:38.205999 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:38.206003 | orchestrator | 2026-04-11 02:51:38.206008 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-11 02:51:38.206012 | orchestrator | Saturday 11 April 2026 02:51:31 +0000 (0:00:00.222) 0:01:12.378 ******** 2026-04-11 02:51:38.206061 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:51:38.206066 | orchestrator | 2026-04-11 02:51:38.206071 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-11 02:51:38.206075 | orchestrator | Saturday 11 April 2026 02:51:31 +0000 
(0:00:00.158) 0:01:12.536 ********
2026-04-11 02:51:38.206080 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206085 | orchestrator |
2026-04-11 02:51:38.206090 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-11 02:51:38.206095 | orchestrator | Saturday 11 April 2026 02:51:31 +0000 (0:00:00.164) 0:01:12.701 ********
2026-04-11 02:51:38.206100 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206104 | orchestrator |
2026-04-11 02:51:38.206109 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-11 02:51:38.206113 | orchestrator | Saturday 11 April 2026 02:51:32 +0000 (0:00:00.428) 0:01:13.130 ********
2026-04-11 02:51:38.206118 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 02:51:38.206123 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-11 02:51:38.206128 | orchestrator | }
2026-04-11 02:51:38.206133 | orchestrator |
2026-04-11 02:51:38.206138 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-11 02:51:38.206142 | orchestrator | Saturday 11 April 2026 02:51:32 +0000 (0:00:00.168) 0:01:13.298 ********
2026-04-11 02:51:38.206147 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 02:51:38.206151 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-11 02:51:38.206156 | orchestrator | }
2026-04-11 02:51:38.206161 | orchestrator |
2026-04-11 02:51:38.206165 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-11 02:51:38.206170 | orchestrator | Saturday 11 April 2026 02:51:32 +0000 (0:00:00.158) 0:01:13.457 ********
2026-04-11 02:51:38.206174 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 02:51:38.206179 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-11 02:51:38.206184 | orchestrator | }
2026-04-11 02:51:38.206188 | orchestrator |
2026-04-11 02:51:38.206193 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-11 02:51:38.206197 | orchestrator | Saturday 11 April 2026 02:51:32 +0000 (0:00:00.172) 0:01:13.629 ********
2026-04-11 02:51:38.206202 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:51:38.206207 | orchestrator |
2026-04-11 02:51:38.206211 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-11 02:51:38.206216 | orchestrator | Saturday 11 April 2026 02:51:33 +0000 (0:00:00.570) 0:01:14.199 ********
2026-04-11 02:51:38.206221 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:51:38.206225 | orchestrator |
2026-04-11 02:51:38.206230 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-11 02:51:38.206234 | orchestrator | Saturday 11 April 2026 02:51:33 +0000 (0:00:00.565) 0:01:14.765 ********
2026-04-11 02:51:38.206239 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:51:38.206243 | orchestrator |
2026-04-11 02:51:38.206248 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-11 02:51:38.206253 | orchestrator | Saturday 11 April 2026 02:51:34 +0000 (0:00:00.540) 0:01:15.306 ********
2026-04-11 02:51:38.206257 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:51:38.206262 | orchestrator |
2026-04-11 02:51:38.206266 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-11 02:51:38.206276 | orchestrator | Saturday 11 April 2026 02:51:34 +0000 (0:00:00.158) 0:01:15.464 ********
2026-04-11 02:51:38.206280 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206285 | orchestrator |
2026-04-11 02:51:38.206290 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-11 02:51:38.206296 | orchestrator | Saturday 11 April 2026 02:51:34 +0000 (0:00:00.126) 0:01:15.591 ********
2026-04-11 02:51:38.206301 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206306 | orchestrator |
2026-04-11 02:51:38.206311 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-11 02:51:38.206317 | orchestrator | Saturday 11 April 2026 02:51:34 +0000 (0:00:00.133) 0:01:15.724 ********
2026-04-11 02:51:38.206322 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 02:51:38.206328 | orchestrator |     "vgs_report": {
2026-04-11 02:51:38.206333 | orchestrator |         "vg": []
2026-04-11 02:51:38.206351 | orchestrator |     }
2026-04-11 02:51:38.206357 | orchestrator | }
2026-04-11 02:51:38.206362 | orchestrator |
2026-04-11 02:51:38.206368 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-11 02:51:38.206373 | orchestrator | Saturday 11 April 2026 02:51:34 +0000 (0:00:00.166) 0:01:15.891 ********
2026-04-11 02:51:38.206379 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206384 | orchestrator |
2026-04-11 02:51:38.206389 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-11 02:51:38.206397 | orchestrator | Saturday 11 April 2026 02:51:34 +0000 (0:00:00.410) 0:01:16.044 ********
2026-04-11 02:51:38.206402 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206407 | orchestrator |
2026-04-11 02:51:38.206411 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-11 02:51:38.206416 | orchestrator | Saturday 11 April 2026 02:51:35 +0000 (0:00:00.170) 0:01:16.454 ********
2026-04-11 02:51:38.206420 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206425 | orchestrator |
2026-04-11 02:51:38.206429 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-11 02:51:38.206434 | orchestrator | Saturday 11 April 2026 02:51:35 +0000 (0:00:00.156) 0:01:16.625 ********
2026-04-11 02:51:38.206438 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206443 | orchestrator |
2026-04-11 02:51:38.206448 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-11 02:51:38.206452 | orchestrator | Saturday 11 April 2026 02:51:35 +0000 (0:00:00.156) 0:01:16.781 ********
2026-04-11 02:51:38.206456 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206461 | orchestrator |
2026-04-11 02:51:38.206466 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-11 02:51:38.206470 | orchestrator | Saturday 11 April 2026 02:51:35 +0000 (0:00:00.158) 0:01:16.940 ********
2026-04-11 02:51:38.206475 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206479 | orchestrator |
2026-04-11 02:51:38.206484 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-11 02:51:38.206488 | orchestrator | Saturday 11 April 2026 02:51:36 +0000 (0:00:00.174) 0:01:17.115 ********
2026-04-11 02:51:38.206493 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206497 | orchestrator |
2026-04-11 02:51:38.206502 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-11 02:51:38.206506 | orchestrator | Saturday 11 April 2026 02:51:36 +0000 (0:00:00.159) 0:01:17.274 ********
2026-04-11 02:51:38.206511 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206515 | orchestrator |
2026-04-11 02:51:38.206520 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-11 02:51:38.206525 | orchestrator | Saturday 11 April 2026 02:51:36 +0000 (0:00:00.142) 0:01:17.417 ********
2026-04-11 02:51:38.206529 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206534 | orchestrator |
2026-04-11 02:51:38.206538 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-11 02:51:38.206543 | orchestrator | Saturday 11 April 2026 02:51:36 +0000 (0:00:00.149) 0:01:17.567 ********
2026-04-11 02:51:38.206552 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206556 | orchestrator |
2026-04-11 02:51:38.206561 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-11 02:51:38.206565 | orchestrator | Saturday 11 April 2026 02:51:36 +0000 (0:00:00.178) 0:01:17.746 ********
2026-04-11 02:51:38.206570 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206574 | orchestrator |
2026-04-11 02:51:38.206579 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-11 02:51:38.206583 | orchestrator | Saturday 11 April 2026 02:51:36 +0000 (0:00:00.179) 0:01:17.925 ********
2026-04-11 02:51:38.206588 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206592 | orchestrator |
2026-04-11 02:51:38.206597 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-11 02:51:38.206601 | orchestrator | Saturday 11 April 2026 02:51:37 +0000 (0:00:00.442) 0:01:18.097 ********
2026-04-11 02:51:38.206606 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206610 | orchestrator |
2026-04-11 02:51:38.206615 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-11 02:51:38.206619 | orchestrator | Saturday 11 April 2026 02:51:37 +0000 (0:00:00.161) 0:01:18.540 ********
2026-04-11 02:51:38.206624 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206628 | orchestrator |
2026-04-11 02:51:38.206633 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-11 02:51:38.206638 | orchestrator | Saturday 11 April 2026 02:51:37 +0000 (0:00:00.161) 0:01:18.701 ********
2026-04-11 02:51:38.206642 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})
2026-04-11 02:51:38.206647 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})
2026-04-11 02:51:38.206652 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206656 | orchestrator |
2026-04-11 02:51:38.206661 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-11 02:51:38.206665 | orchestrator | Saturday 11 April 2026 02:51:37 +0000 (0:00:00.177) 0:01:18.879 ********
2026-04-11 02:51:38.206670 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})
2026-04-11 02:51:38.206674 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})
2026-04-11 02:51:38.206679 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:38.206683 | orchestrator |
2026-04-11 02:51:38.206688 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-11 02:51:38.206693 | orchestrator | Saturday 11 April 2026 02:51:38 +0000 (0:00:00.171) 0:01:19.050 ********
2026-04-11 02:51:38.206700 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})
2026-04-11 02:51:41.686271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})
2026-04-11 02:51:41.686353 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:41.686363 | orchestrator |
2026-04-11 02:51:41.686382 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-11 02:51:41.686390 | orchestrator | Saturday 11 April 2026 02:51:38 +0000 (0:00:00.198) 0:01:19.249 ********
2026-04-11 02:51:41.686396 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})
2026-04-11 02:51:41.686403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})
2026-04-11 02:51:41.686425 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:41.686431 | orchestrator |
2026-04-11 02:51:41.686437 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-11 02:51:41.686444 | orchestrator | Saturday 11 April 2026 02:51:38 +0000 (0:00:00.190) 0:01:19.439 ********
2026-04-11 02:51:41.686450 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})
2026-04-11 02:51:41.686456 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})
2026-04-11 02:51:41.686462 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:41.686468 | orchestrator |
2026-04-11 02:51:41.686473 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-11 02:51:41.686479 | orchestrator | Saturday 11 April 2026 02:51:38 +0000 (0:00:00.177) 0:01:19.616 ********
2026-04-11 02:51:41.686485 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})
2026-04-11 02:51:41.686491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})
2026-04-11 02:51:41.686497 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:41.686503 | orchestrator |
2026-04-11 02:51:41.686509 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-11 02:51:41.686515 | orchestrator | Saturday 11 April 2026 02:51:38 +0000 (0:00:00.178) 0:01:19.795 ********
2026-04-11 02:51:41.686521 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})
2026-04-11 02:51:41.686527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})
2026-04-11 02:51:41.686533 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:41.686539 | orchestrator |
2026-04-11 02:51:41.686545 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-11 02:51:41.686551 | orchestrator | Saturday 11 April 2026 02:51:38 +0000 (0:00:00.177) 0:01:19.973 ********
2026-04-11 02:51:41.686557 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})
2026-04-11 02:51:41.686563 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})
2026-04-11 02:51:41.686568 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:41.686574 | orchestrator |
2026-04-11 02:51:41.686580 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-11 02:51:41.686586 | orchestrator | Saturday 11 April 2026 02:51:39 +0000 (0:00:00.202) 0:01:20.176 ********
2026-04-11 02:51:41.686592 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:51:41.686599 | orchestrator |
2026-04-11 02:51:41.686606 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-11 02:51:41.686611 | orchestrator | Saturday 11 April 2026 02:51:39 +0000 (0:00:00.551) 0:01:20.727 ********
2026-04-11 02:51:41.686617 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:51:41.686623 | orchestrator |
2026-04-11 02:51:41.686629 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-11 02:51:41.686635 | orchestrator | Saturday 11 April 2026 02:51:40 +0000 (0:00:00.873) 0:01:21.600 ********
2026-04-11 02:51:41.686641 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:51:41.686647 | orchestrator |
2026-04-11 02:51:41.686653 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-11 02:51:41.686659 | orchestrator | Saturday 11 April 2026 02:51:40 +0000 (0:00:00.174) 0:01:21.775 ********
2026-04-11 02:51:41.686670 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'vg_name': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})
2026-04-11 02:51:41.686678 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'vg_name': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})
2026-04-11 02:51:41.686683 | orchestrator |
2026-04-11 02:51:41.686689 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-11 02:51:41.686695 | orchestrator | Saturday 11 April 2026 02:51:40 +0000 (0:00:00.192) 0:01:21.967 ********
2026-04-11 02:51:41.686713 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})
2026-04-11 02:51:41.686723 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})
2026-04-11 02:51:41.686730 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:41.686735 | orchestrator |
2026-04-11 02:51:41.686741 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-11 02:51:41.686747 | orchestrator | Saturday 11 April 2026 02:51:41 +0000 (0:00:00.180) 0:01:22.147 ********
2026-04-11 02:51:41.686753 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})
2026-04-11 02:51:41.686759 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})
2026-04-11 02:51:41.686765 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:41.686771 | orchestrator |
2026-04-11 02:51:41.686776 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-11 02:51:41.686782 | orchestrator | Saturday 11 April 2026 02:51:41 +0000 (0:00:00.186) 0:01:22.334 ********
2026-04-11 02:51:41.686788 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})
2026-04-11 02:51:41.686817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})
2026-04-11 02:51:41.686825 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:51:41.686831 | orchestrator |
2026-04-11 02:51:41.686838 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-11 02:51:41.686845 | orchestrator | Saturday 11 April 2026 02:51:41 +0000 (0:00:00.192) 0:01:22.526 ********
2026-04-11 02:51:41.686851 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 02:51:41.686858 | orchestrator |     "lvm_report": {
2026-04-11 02:51:41.686866 | orchestrator |         "lv": [
2026-04-11 02:51:41.686873 | orchestrator |             {
2026-04-11 02:51:41.686880 | orchestrator |                 "lv_name": "osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056",
2026-04-11 02:51:41.686888 | orchestrator |                 "vg_name": "ceph-a718c651-a264-5d59-a3a1-3dddb23bb056"
2026-04-11 02:51:41.686894 | orchestrator |             },
2026-04-11 02:51:41.686901 | orchestrator |             {
2026-04-11 02:51:41.686908 | orchestrator |                 "lv_name": "osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412",
2026-04-11 02:51:41.686915 | orchestrator |                 "vg_name": "ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412"
2026-04-11 02:51:41.686921 | orchestrator |             }
2026-04-11 02:51:41.686928 | orchestrator |         ],
2026-04-11 02:51:41.686934 | orchestrator |         "pv": [
2026-04-11 02:51:41.686940 | orchestrator |             {
2026-04-11 02:51:41.686947 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-11 02:51:41.686954 | orchestrator |                 "vg_name": "ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412"
2026-04-11 02:51:41.686960 | orchestrator |             },
2026-04-11 02:51:41.686967 | orchestrator |             {
2026-04-11 02:51:41.686974 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-11 02:51:41.686989 | orchestrator |                 "vg_name": "ceph-a718c651-a264-5d59-a3a1-3dddb23bb056"
2026-04-11 02:51:41.686996 | orchestrator |             }
2026-04-11 02:51:41.687003 | orchestrator |         ]
2026-04-11 02:51:41.687009 | orchestrator |     }
2026-04-11 02:51:41.687016 | orchestrator | }
2026-04-11 02:51:41.687024 | orchestrator |
2026-04-11 02:51:41.687030 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:51:41.687037 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-11 02:51:41.687044 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-11 02:51:41.687050 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-11 02:51:41.687057 | orchestrator |
2026-04-11 02:51:41.687064 | orchestrator |
2026-04-11 02:51:41.687070 | orchestrator |
2026-04-11 02:51:41.687077 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:51:41.687084 | orchestrator | Saturday 11 April 2026 02:51:41 +0000 (0:00:00.186) 0:01:22.713 ********
2026-04-11 02:51:41.687090 | orchestrator | ===============================================================================
2026-04-11 02:51:41.687097 | orchestrator | Create block VGs -------------------------------------------------------- 5.80s
2026-04-11 02:51:41.687104 | orchestrator | Create block LVs -------------------------------------------------------- 4.24s
2026-04-11 02:51:41.687111 | orchestrator | Add known partitions to the list of available block devices ------------- 2.17s
2026-04-11 02:51:41.687117 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.96s
2026-04-11 02:51:41.687124 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.81s
2026-04-11 02:51:41.687130 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.66s
2026-04-11 02:51:41.687137 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.66s
2026-04-11 02:51:41.687144 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.63s
2026-04-11 02:51:41.687155 | orchestrator | Add known links to the list of available block devices ------------------ 1.59s
2026-04-11 02:51:42.213129 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s
2026-04-11 02:51:42.213202 | orchestrator | Fail if number of OSDs exceeds num_osds for a DB+WAL VG ----------------- 1.01s
2026-04-11 02:51:42.213210 | orchestrator | Add known links to the list of available block devices ------------------ 0.99s
2026-04-11 02:51:42.213234 | orchestrator | Calculate size needed for LVs on ceph_db_devices ------------------------ 0.98s
2026-04-11 02:51:42.213240 | orchestrator | Add known partitions to the list of available block devices ------------- 0.96s
2026-04-11 02:51:42.213247 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s
2026-04-11 02:51:42.213253 | orchestrator | Print LVM report data --------------------------------------------------- 0.88s
2026-04-11 02:51:42.213259 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.85s
2026-04-11 02:51:42.213265 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.82s
2026-04-11 02:51:42.213271 | orchestrator | Get initial list of available block devices ----------------------------- 0.81s
2026-04-11 02:51:42.213277 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.80s
2026-04-11 02:51:55.283648 | orchestrator | 2026-04-11 02:51:55 | INFO  | Task 50bc42a1-4721-463f-8215-415bfabbff19 (facts) was prepared for execution.
2026-04-11 02:51:55.283749 | orchestrator | 2026-04-11 02:51:55 | INFO  | It takes a moment until task 50bc42a1-4721-463f-8215-415bfabbff19 (facts) has been started and output is visible here.
2026-04-11 02:52:10.478146 | orchestrator |
2026-04-11 02:52:10.478233 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-11 02:52:10.478263 | orchestrator |
2026-04-11 02:52:10.478268 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-11 02:52:10.478273 | orchestrator | Saturday 11 April 2026 02:52:00 +0000 (0:00:00.349) 0:00:00.349 ********
2026-04-11 02:52:10.478278 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:52:10.478283 | orchestrator | ok: [testbed-manager]
2026-04-11 02:52:10.478287 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:52:10.478291 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:52:10.478295 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:52:10.478300 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:52:10.478304 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:52:10.478308 | orchestrator |
2026-04-11 02:52:10.478312 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-11 02:52:10.478317 | orchestrator | Saturday 11 April 2026 02:52:01 +0000 (0:00:01.303) 0:00:01.653 ********
2026-04-11 02:52:10.478321 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:52:10.478326 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:10.478330 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:10.478334 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:10.478338 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:10.478342 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:10.478346 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:10.478351 | orchestrator |
2026-04-11 02:52:10.478355 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-11 02:52:10.478359 | orchestrator |
2026-04-11 02:52:10.478363 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-11 02:52:10.478367 | orchestrator | Saturday 11 April 2026 02:52:03 +0000 (0:00:01.505) 0:00:03.158 ********
2026-04-11 02:52:10.478371 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:52:10.478375 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:52:10.478379 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:52:10.478383 | orchestrator | ok: [testbed-manager]
2026-04-11 02:52:10.478387 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:52:10.478391 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:52:10.478395 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:52:10.478398 | orchestrator |
2026-04-11 02:52:10.478402 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-11 02:52:10.478406 | orchestrator |
2026-04-11 02:52:10.478410 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-11 02:52:10.478414 | orchestrator | Saturday 11 April 2026 02:52:09 +0000 (0:00:06.020) 0:00:09.179 ********
2026-04-11 02:52:10.478417 | orchestrator | skipping: [testbed-manager]
2026-04-11 02:52:10.478421 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:10.478425 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:10.478428 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:10.478432 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:10.478436 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:10.478440 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:10.478443 | orchestrator |
2026-04-11 02:52:10.478447 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 02:52:10.478451 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:52:10.478456 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:52:10.478460 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:52:10.478466 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:52:10.478473 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:52:10.478482 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:52:10.478485 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 02:52:10.478489 | orchestrator |
2026-04-11 02:52:10.478493 | orchestrator |
2026-04-11 02:52:10.478498 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 02:52:10.478517 | orchestrator | Saturday 11 April 2026 02:52:09 +0000 (0:00:00.637) 0:00:09.817 ********
2026-04-11 02:52:10.478524 | orchestrator | ===============================================================================
2026-04-11 02:52:10.478530 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.02s
2026-04-11 02:52:10.478536 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.51s
2026-04-11 02:52:10.478541 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.30s
2026-04-11 02:52:10.478547 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.64s
2026-04-11 02:52:13.229185 | orchestrator | 2026-04-11 02:52:13 | INFO  | Task 4c659c18-ac4b-49ba-842c-03c1d89e715a (ceph) was prepared for execution.
2026-04-11 02:52:13.229309 | orchestrator | 2026-04-11 02:52:13 | INFO  | It takes a moment until task 4c659c18-ac4b-49ba-842c-03c1d89e715a (ceph) has been started and output is visible here.
2026-04-11 02:52:33.855674 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-11 02:52:33.855745 | orchestrator | 2.16.14
2026-04-11 02:52:33.855752 | orchestrator |
2026-04-11 02:52:33.855757 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-04-11 02:52:33.855762 | orchestrator |
2026-04-11 02:52:33.855767 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-11 02:52:33.855771 | orchestrator | Saturday 11 April 2026 02:52:19 +0000 (0:00:00.963) 0:00:00.963 ********
2026-04-11 02:52:33.855776 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:52:33.855781 | orchestrator |
2026-04-11 02:52:33.855785 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-11 02:52:33.855789 | orchestrator | Saturday 11 April 2026 02:52:20 +0000 (0:00:01.381) 0:00:02.345 ********
2026-04-11 02:52:33.855793 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:52:33.855797 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:52:33.855801 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:52:33.855804 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:52:33.855808 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:52:33.855812 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:52:33.855816 | orchestrator |
2026-04-11 02:52:33.855820 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-11 02:52:33.855824 | orchestrator | Saturday 11 April 2026 02:52:21 +0000 (0:00:01.372) 0:00:03.718 ********
2026-04-11 02:52:33.855828 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:52:33.855832 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:52:33.855835 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:52:33.855839 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:52:33.855843 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:52:33.855847 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:52:33.855850 | orchestrator |
2026-04-11 02:52:33.855854 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-11 02:52:33.855858 | orchestrator | Saturday 11 April 2026 02:52:22 +0000 (0:00:00.921) 0:00:04.639 ********
2026-04-11 02:52:33.855862 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:52:33.855896 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:52:33.855900 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:52:33.855903 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:52:33.855925 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:52:33.855929 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:52:33.855933 | orchestrator |
2026-04-11 02:52:33.855937 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-11 02:52:33.855941 | orchestrator | Saturday 11 April 2026 02:52:23 +0000 (0:00:01.079) 0:00:05.719 ********
2026-04-11 02:52:33.855944 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:52:33.855948 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:52:33.855952 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:52:33.855956 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:52:33.855959 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:52:33.855963 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:52:33.855967 | orchestrator |
2026-04-11 02:52:33.855971 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-11 02:52:33.855974 | orchestrator | Saturday 11 April 2026 02:52:24 +0000 (0:00:00.942) 0:00:06.661 ********
2026-04-11 02:52:33.855978 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:52:33.855982 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:52:33.855986 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:52:33.855989 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:52:33.855993 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:52:33.855997 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:52:33.856000 | orchestrator |
2026-04-11 02:52:33.856004 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-11 02:52:33.856008 | orchestrator | Saturday 11 April 2026 02:52:25 +0000 (0:00:00.705) 0:00:07.367 ********
2026-04-11 02:52:33.856012 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:52:33.856015 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:52:33.856019 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:52:33.856023 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:52:33.856027 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:52:33.856030 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:52:33.856034 | orchestrator |
2026-04-11 02:52:33.856038 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-11 02:52:33.856042 | orchestrator | Saturday 11 April 2026 02:52:26 +0000 (0:00:00.962) 0:00:08.329 ********
2026-04-11 02:52:33.856045 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:33.856050 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:33.856054 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:33.856057 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:33.856061 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:33.856065 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:33.856069 | orchestrator |
2026-04-11 02:52:33.856073 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-11 02:52:33.856076 | orchestrator | Saturday 11 April 2026 02:52:27 +0000 (0:00:00.744) 0:00:09.074 ********
2026-04-11 02:52:33.856080 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:52:33.856084 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:52:33.856088 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:52:33.856091 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:52:33.856104 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:52:33.856108 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:52:33.856112 | orchestrator |
2026-04-11 02:52:33.856116 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-11 02:52:33.856119 | orchestrator | Saturday 11 April 2026 02:52:28 +0000 (0:00:00.896) 0:00:09.971 ********
2026-04-11 02:52:33.856124 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 02:52:33.856128 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 02:52:33.856131 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 02:52:33.856135 | orchestrator |
2026-04-11 02:52:33.856139 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-11 02:52:33.856143 | orchestrator | Saturday 11 April 2026 02:52:28 +0000 (0:00:00.784) 0:00:10.756 ********
2026-04-11 02:52:33.856151 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:52:33.856155 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:52:33.856158 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:52:33.856172 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:52:33.856176 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:52:33.856180 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:52:33.856184 | orchestrator |
2026-04-11 02:52:33.856188 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-11 02:52:33.856191 | orchestrator | Saturday 11 April 2026 02:52:29 +0000 (0:00:00.832) 0:00:11.588 ********
2026-04-11 02:52:33.856195 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] =>
(item=testbed-node-0)
2026-04-11 02:52:33.856199 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 02:52:33.856203 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 02:52:33.856207 | orchestrator |
2026-04-11 02:52:33.856211 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-11 02:52:33.856214 | orchestrator | Saturday 11 April 2026 02:52:32 +0000 (0:00:02.567) 0:00:14.156 ********
2026-04-11 02:52:33.856218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-11 02:52:33.856222 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-11 02:52:33.856226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-11 02:52:33.856230 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:33.856234 | orchestrator |
2026-04-11 02:52:33.856238 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-11 02:52:33.856241 | orchestrator | Saturday 11 April 2026 02:52:32 +0000 (0:00:00.441) 0:00:14.598 ********
2026-04-11 02:52:33.856247 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 02:52:33.856254 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 02:52:33.856259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 02:52:33.856263 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:33.856268 | orchestrator |
2026-04-11 02:52:33.856272 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-11 02:52:33.856277 | orchestrator | Saturday 11 April 2026 02:52:33 +0000 (0:00:00.656) 0:00:15.254 ********
2026-04-11 02:52:33.856282 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 02:52:33.856288 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 02:52:33.856293 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 02:52:33.856302 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:33.856307 | orchestrator |
2026-04-11 02:52:33.856314 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container]
***************************
2026-04-11 02:52:33.856318 | orchestrator | Saturday 11 April 2026 02:52:33 +0000 (0:00:00.173) 0:00:15.427 ********
2026-04-11 02:52:33.856328 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 02:52:30.846541', 'end': '2026-04-11 02:52:30.897580', 'delta': '0:00:00.051039', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 02:52:44.851975 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 02:52:31.400762', 'end': '2026-04-11 02:52:31.440998', 'delta': '0:00:00.040236', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 02:52:44.852112 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 02:52:31.931259', 'end': '2026-04-11 02:52:31.985822', 'delta': '0:00:00.054563', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 02:52:44.852144 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:44.852163 | orchestrator |
2026-04-11 02:52:44.852186 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-11 02:52:44.852209 | orchestrator | Saturday 11 April 2026 02:52:33 +0000 (0:00:00.211) 0:00:15.639 ********
2026-04-11 02:52:44.852225 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:52:44.852242 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:52:44.852254 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:52:44.852277 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:52:44.852298 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:52:44.852313 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:52:44.852330 | orchestrator |
2026-04-11 02:52:44.852347 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-11 02:52:44.852359 | orchestrator | Saturday 11 April 2026 02:52:34 +0000 (0:00:00.919) 0:00:16.558 ********
2026-04-11 02:52:44.852372 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-11 02:52:44.852386 | orchestrator |
2026-04-11 02:52:44.852403 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-11 02:52:44.852427 | orchestrator | Saturday 11 April 2026 02:52:35 +0000 (0:00:00.937) 0:00:17.496 ********
2026-04-11 02:52:44.852467 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:44.852494 |
orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:44.852509 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:44.852523 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:44.852539 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:44.852552 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:44.852569 | orchestrator |
2026-04-11 02:52:44.852591 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-11 02:52:44.852606 | orchestrator | Saturday 11 April 2026 02:52:36 +0000 (0:00:00.964) 0:00:18.460 ********
2026-04-11 02:52:44.852619 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:44.852639 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:44.852654 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:44.852675 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:44.852688 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:44.852703 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:44.852733 | orchestrator |
2026-04-11 02:52:44.852748 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 02:52:44.852764 | orchestrator | Saturday 11 April 2026 02:52:38 +0000 (0:00:01.340) 0:00:19.801 ********
2026-04-11 02:52:44.852777 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:44.852788 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:44.852803 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:44.852816 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:44.852832 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:44.852862 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:44.852876 | orchestrator |
2026-04-11 02:52:44.852987 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-11 02:52:44.853001 | orchestrator | Saturday 11 April 2026 02:52:38 +0000 (0:00:00.637) 0:00:20.438 ********
2026-04-11 02:52:44.853014 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:44.853027 | orchestrator |
2026-04-11 02:52:44.853045 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-11 02:52:44.853058 | orchestrator | Saturday 11 April 2026 02:52:38 +0000 (0:00:00.128) 0:00:20.567 ********
2026-04-11 02:52:44.853070 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:44.853084 | orchestrator |
2026-04-11 02:52:44.853095 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 02:52:44.853106 | orchestrator | Saturday 11 April 2026 02:52:39 +0000 (0:00:00.252) 0:00:20.819 ********
2026-04-11 02:52:44.853123 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:44.853144 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:44.853163 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:44.853177 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:44.853196 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:44.853213 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:44.853228 | orchestrator |
2026-04-11 02:52:44.853260 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-11 02:52:44.853273 | orchestrator | Saturday 11 April 2026 02:52:39 +0000 (0:00:00.874) 0:00:21.693 ********
2026-04-11 02:52:44.853286 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:44.853301 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:44.853317 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:44.853329 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:44.853340 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:44.853361 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:44.853383 | orchestrator |
2026-04-11 02:52:44.853395 | orchestrator | TASK [ceph-facts :
Set_fact build devices from resolved symlinks] **************
2026-04-11 02:52:44.853406 | orchestrator | Saturday 11 April 2026 02:52:40 +0000 (0:00:00.670) 0:00:22.364 ********
2026-04-11 02:52:44.853419 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:44.853437 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:44.853451 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:44.853479 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:44.853495 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:44.853511 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:44.853527 | orchestrator |
2026-04-11 02:52:44.853539 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-11 02:52:44.853550 | orchestrator | Saturday 11 April 2026 02:52:41 +0000 (0:00:00.906) 0:00:23.271 ********
2026-04-11 02:52:44.853564 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:44.853575 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:44.853586 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:44.853597 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:44.853608 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:44.853618 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:44.853629 | orchestrator |
2026-04-11 02:52:44.853640 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-11 02:52:44.853651 | orchestrator | Saturday 11 April 2026 02:52:42 +0000 (0:00:00.653) 0:00:23.924 ********
2026-04-11 02:52:44.853662 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:44.853674 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:44.853685 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:44.853698 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:44.853711 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:44.853723 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:44.853735 | orchestrator |
2026-04-11 02:52:44.853749 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-11 02:52:44.853761 | orchestrator | Saturday 11 April 2026 02:52:42 +0000 (0:00:00.865) 0:00:24.790 ********
2026-04-11 02:52:44.853772 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:44.853787 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:44.853800 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:44.853812 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:44.853825 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:44.853837 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:44.853850 | orchestrator |
2026-04-11 02:52:44.853863 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-11 02:52:44.853904 | orchestrator | Saturday 11 April 2026 02:52:43 +0000 (0:00:00.713) 0:00:25.503 ********
2026-04-11 02:52:44.853919 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:44.853931 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:44.853944 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:44.853958 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:44.853971 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:44.853984 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:44.853993 | orchestrator |
2026-04-11 02:52:44.854001 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-11 02:52:44.854009 | orchestrator | Saturday 11 April 2026 02:52:44 +0000 (0:00:00.909) 0:00:26.412 ********
2026-04-11 02:52:44.854076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003', 'dm-uuid-LVM-pkDfTbVQleSwcS4k7Dh9BVsoBeNfZTa2LK4cAT3noeZwIltxQlTmbG23aNcLYOeQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:44.854096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200', 'dm-uuid-LVM-Bzm8veJ8WajxWE1rbQG3D6L1YQ7NRJWE5nYLkJZ3j15jpE3LHjt0hSXc3WZuWEzG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:44.854125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:44.946651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:44.946780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:44.946805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:44.946826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:44.946846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:44.946866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:44.946915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:44.946960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855', 'dm-uuid-LVM-K7WW8kSs32CapDCsexGLtC6qsV1U5049IOnZa3AHrzxg1HkvDRqme1iBPNHDbFWh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:44.947053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 02:52:44.947079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2', 'dm-uuid-LVM-1JO1XI6e6VuGVeVzDykcfKbBtikjhudLLEUIdm7ttGNsolk0UkQjcUO4narXEX2E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:44.947102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PQeocr-BDfK-Omm3-UVAY-4ZFi-qC83-UyfjmY', 'scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c', 'scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 02:52:44.947132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:44.947175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ESicMG-Y3he-y5ZC-yq3K-67sS-s0jj-bJ518K', 'scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898', 'scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 02:52:45.130504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:45.130590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7', 'scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 02:52:45.130602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:45.130612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 02:52:45.130621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:45.130630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:45.130670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:45.130678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:45.130700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 02:52:45.130711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 02:52:45.130727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Gs6fgb-1Wcf-xL0p-5nrc-t0Sp-iDOp-vEqK0z', 'scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb', 'scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 02:52:45.130754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MaeyQs-lCkd-15by-ONeM-2vsv-Cp22-T0mgnh', 'scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f', 'scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 02:52:45.130776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac', 'scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 02:52:45.262560 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:52:45.262668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 02:52:45.262689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412', 'dm-uuid-LVM-VdQ7qTAVdW9b0W0u4soeoyYCMykAdMqIVywyC0poxaFsTavehHwqykfd0GhP5gkQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.262706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056', 'dm-uuid-LVM-6h6BzLnxVSITPOCXsTMPdEdwYnxpyl6jcENBjNwdWV4iIXI6HpUJIGXCmHnbKWOn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.262739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.262795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.262836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.262852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.262866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.262960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.262978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.262993 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:52:45.263008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.263033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 02:52:45.263062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Gv5rB0-5v31-5ChI-IvnR-CmdW-Foh5-mihe2a', 'scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3', 'scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 02:52:45.263088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JtVcog-BSy1-h8Zb-tm9w-DiRX-1Dbq-bS56zI', 'scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78', 'scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 02:52:45.492068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735', 'scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 02:52:45.492234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 02:52:45.492300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.492323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.492351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.492363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.492374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.492386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.492417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.492429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.492450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 02:52:45.492474 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 02:52:45.492487 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:52:45.492500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.492511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.492546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-11 02:52:45.749059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.749209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.749231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.749249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.749285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.749333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 02:52:45.749388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 02:52:45.749408 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:52:45.749425 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:52:45.749442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.749460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.749484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.749501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.749520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.749538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.749554 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.749583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 02:52:45.985448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 02:52:45.985546 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 02:52:45.985564 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:52:45.985577 | orchestrator | 2026-04-11 02:52:45.985590 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 02:52:45.985603 | orchestrator | Saturday 11 April 2026 02:52:45 +0000 (0:00:01.117) 0:00:27.530 ******** 2026-04-11 02:52:45.985616 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003', 'dm-uuid-LVM-pkDfTbVQleSwcS4k7Dh9BVsoBeNfZTa2LK4cAT3noeZwIltxQlTmbG23aNcLYOeQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:45.985672 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200', 'dm-uuid-LVM-Bzm8veJ8WajxWE1rbQG3D6L1YQ7NRJWE5nYLkJZ3j15jpE3LHjt0hSXc3WZuWEzG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:45.985685 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:45.985698 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:45.985718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:45.985731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:45.985743 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:45.985782 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:45.985798 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855', 'dm-uuid-LVM-K7WW8kSs32CapDCsexGLtC6qsV1U5049IOnZa3AHrzxg1HkvDRqme1iBPNHDbFWh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.095822 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.096025 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2', 'dm-uuid-LVM-1JO1XI6e6VuGVeVzDykcfKbBtikjhudLLEUIdm7ttGNsolk0UkQjcUO4narXEX2E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.096038 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.096047 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.096091 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 
1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.096104 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.096112 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PQeocr-BDfK-Omm3-UVAY-4ZFi-qC83-UyfjmY', 'scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c', 'scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.096120 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.096134 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ESicMG-Y3he-y5ZC-yq3K-67sS-s0jj-bJ518K', 'scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898', 'scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.096147 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.116192 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7', 'scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.116302 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.116323 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.116366 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.116384 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.116399 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.116448 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.116469 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Gs6fgb-1Wcf-xL0p-5nrc-t0Sp-iDOp-vEqK0z', 'scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb', 'scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.116485 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MaeyQs-lCkd-15by-ONeM-2vsv-Cp22-T0mgnh', 'scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f', 'scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.640085 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:52:46.640287 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac', 'scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.640321 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-33-00']}, 'model': 'QEMU DVD-ROM', 
'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.640360 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412', 'dm-uuid-LVM-VdQ7qTAVdW9b0W0u4soeoyYCMykAdMqIVywyC0poxaFsTavehHwqykfd0GhP5gkQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.640374 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056', 'dm-uuid-LVM-6h6BzLnxVSITPOCXsTMPdEdwYnxpyl6jcENBjNwdWV4iIXI6HpUJIGXCmHnbKWOn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.640386 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.640422 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.640444 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.640457 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.640478 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:52:46.640492 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.640505 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.640517 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.640530 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.640557 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.760388 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.760495 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.760505 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.760524 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Gv5rB0-5v31-5ChI-IvnR-CmdW-Foh5-mihe2a', 'scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3', 'scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.760530 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.760543 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JtVcog-BSy1-h8Zb-tm9w-DiRX-1Dbq-bS56zI', 'scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78', 'scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.760547 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.760585 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.760593 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735', 'scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.760609 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.940053 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.940172 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.940214 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-11 02:52:46.940258 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-30-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.940299 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:52:46.940314 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.940328 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.940339 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.940351 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.940363 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.940381 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:46.940407 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:47.189230 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:47.189326 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:47.189394 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:47.189405 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:52:47.189413 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:52:47.189430 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-11 02:52:47.189435 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:47.189439 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:47.189443 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 
02:52:47.189447 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:47.189458 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:47.189462 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:47.189471 | orchestrator | skipping: 
[testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 02:52:55.515872 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 02:52:55.516013 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 02:52:55.516024 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:55.516030 | orchestrator |
2026-04-11 02:52:55.516035 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-11 02:52:55.516040 | orchestrator | Saturday 11 April 2026 02:52:47 +0000 (0:00:01.437) 0:00:28.967 ********
2026-04-11 02:52:55.516044 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:52:55.516049 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:52:55.516053 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:52:55.516065 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:52:55.516111 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:52:55.516117 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:52:55.516121 | orchestrator |
2026-04-11 02:52:55.516125 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-11 02:52:55.516129 | orchestrator | Saturday 11 April 2026 02:52:48 +0000 (0:00:00.932) 0:00:30.008 ********
2026-04-11 02:52:55.516133 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:52:55.516136 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:52:55.516140 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:52:55.516144 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:52:55.516148 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:52:55.516152 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:52:55.516155 | orchestrator |
2026-04-11 02:52:55.516159 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-11 02:52:55.516163 | orchestrator | Saturday 11 April 2026 02:52:49 +0000 (0:00:01.040) 0:00:30.941 ********
2026-04-11 02:52:55.516167 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:52:55.516171 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:52:55.516175 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:52:55.516189 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:52:55.516193 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:52:55.516197 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:52:55.516201 | orchestrator |
2026-04-11 02:52:55.516204 | 
orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 02:52:55.516209 | orchestrator | Saturday 11 April 2026 02:52:49 +0000 (0:00:00.675) 0:00:31.616 ******** 2026-04-11 02:52:55.516212 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:52:55.516216 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:52:55.516220 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:52:55.516224 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:52:55.516227 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:52:55.516231 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:52:55.516235 | orchestrator | 2026-04-11 02:52:55.516239 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 02:52:55.516242 | orchestrator | Saturday 11 April 2026 02:52:50 +0000 (0:00:00.967) 0:00:32.584 ******** 2026-04-11 02:52:55.516246 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:52:55.516250 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:52:55.516254 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:52:55.516263 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:52:55.516267 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:52:55.516271 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:52:55.516275 | orchestrator | 2026-04-11 02:52:55.516278 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 02:52:55.516282 | orchestrator | Saturday 11 April 2026 02:52:51 +0000 (0:00:00.709) 0:00:33.293 ******** 2026-04-11 02:52:55.516286 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:52:55.516290 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:52:55.516294 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:52:55.516297 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:52:55.516301 | orchestrator | skipping: [testbed-node-1] 
2026-04-11 02:52:55.516305 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:52:55.516309 | orchestrator | 2026-04-11 02:52:55.516313 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 02:52:55.516319 | orchestrator | Saturday 11 April 2026 02:52:52 +0000 (0:00:00.975) 0:00:34.268 ******** 2026-04-11 02:52:55.516325 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-11 02:52:55.516331 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-11 02:52:55.516340 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-11 02:52:55.516348 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-11 02:52:55.516354 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-11 02:52:55.516359 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-11 02:52:55.516365 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 02:52:55.516370 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-11 02:52:55.516375 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-11 02:52:55.516382 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-11 02:52:55.516388 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-11 02:52:55.516393 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-11 02:52:55.516399 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-11 02:52:55.516406 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-11 02:52:55.516412 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-11 02:52:55.516418 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-11 02:52:55.516424 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-11 02:52:55.516435 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-11 
02:52:55.516441 | orchestrator | 2026-04-11 02:52:55.516447 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 02:52:55.516454 | orchestrator | Saturday 11 April 2026 02:52:54 +0000 (0:00:01.968) 0:00:36.237 ******** 2026-04-11 02:52:55.516461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-11 02:52:55.516468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-11 02:52:55.516474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-11 02:52:55.516480 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:52:55.516486 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-11 02:52:55.516492 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-11 02:52:55.516497 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-11 02:52:55.516504 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:52:55.516509 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-11 02:52:55.516516 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-11 02:52:55.516522 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-11 02:52:55.516529 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:52:55.516535 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-11 02:52:55.516542 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-11 02:52:55.516554 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-11 02:52:55.516560 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:52:55.516567 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-11 02:52:55.516574 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-11 02:52:55.516581 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-2)  2026-04-11 02:52:55.516588 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:52:55.516592 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-11 02:52:55.516597 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-11 02:52:55.516601 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-11 02:52:55.516605 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:52:55.516610 | orchestrator | 2026-04-11 02:52:55.516614 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 02:52:55.516623 | orchestrator | Saturday 11 April 2026 02:52:55 +0000 (0:00:01.061) 0:00:37.299 ******** 2026-04-11 02:53:15.780338 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:15.780443 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:15.780460 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:15.780473 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 02:53:15.780485 | orchestrator | 2026-04-11 02:53:15.780497 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 02:53:15.780510 | orchestrator | Saturday 11 April 2026 02:52:56 +0000 (0:00:01.203) 0:00:38.502 ******** 2026-04-11 02:53:15.780522 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:15.780533 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:15.780544 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:15.780555 | orchestrator | 2026-04-11 02:53:15.780566 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 02:53:15.780578 | orchestrator | Saturday 11 April 2026 02:52:57 +0000 (0:00:00.395) 0:00:38.898 ******** 2026-04-11 02:53:15.780588 | orchestrator 
| skipping: [testbed-node-3] 2026-04-11 02:53:15.780599 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:15.780609 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:15.780620 | orchestrator | 2026-04-11 02:53:15.780631 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 02:53:15.780642 | orchestrator | Saturday 11 April 2026 02:52:57 +0000 (0:00:00.363) 0:00:39.261 ******** 2026-04-11 02:53:15.780652 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:15.780663 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:15.780674 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:15.780685 | orchestrator | 2026-04-11 02:53:15.780695 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 02:53:15.780706 | orchestrator | Saturday 11 April 2026 02:52:57 +0000 (0:00:00.350) 0:00:39.612 ******** 2026-04-11 02:53:15.780717 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:53:15.780729 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:53:15.780740 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:53:15.780750 | orchestrator | 2026-04-11 02:53:15.780761 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 02:53:15.780771 | orchestrator | Saturday 11 April 2026 02:52:58 +0000 (0:00:00.827) 0:00:40.439 ******** 2026-04-11 02:53:15.780782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 02:53:15.780793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 02:53:15.780804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 02:53:15.780814 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:15.780825 | orchestrator | 2026-04-11 02:53:15.780836 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 02:53:15.780895 | 
orchestrator | Saturday 11 April 2026 02:52:59 +0000 (0:00:00.470) 0:00:40.910 ******** 2026-04-11 02:53:15.780968 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 02:53:15.780982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 02:53:15.780995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 02:53:15.781008 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:15.781020 | orchestrator | 2026-04-11 02:53:15.781033 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 02:53:15.781045 | orchestrator | Saturday 11 April 2026 02:52:59 +0000 (0:00:00.426) 0:00:41.336 ******** 2026-04-11 02:53:15.781072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 02:53:15.781085 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 02:53:15.781097 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 02:53:15.781110 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:15.781128 | orchestrator | 2026-04-11 02:53:15.781146 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 02:53:15.781163 | orchestrator | Saturday 11 April 2026 02:52:59 +0000 (0:00:00.428) 0:00:41.764 ******** 2026-04-11 02:53:15.781181 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:53:15.781198 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:53:15.781215 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:53:15.781229 | orchestrator | 2026-04-11 02:53:15.781246 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 02:53:15.781263 | orchestrator | Saturday 11 April 2026 02:53:00 +0000 (0:00:00.396) 0:00:42.160 ******** 2026-04-11 02:53:15.781281 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-11 02:53:15.781299 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-04-11 02:53:15.781313 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-11 02:53:15.781329 | orchestrator | 2026-04-11 02:53:15.781345 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 02:53:15.781361 | orchestrator | Saturday 11 April 2026 02:53:01 +0000 (0:00:01.110) 0:00:43.271 ******** 2026-04-11 02:53:15.781379 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 02:53:15.781397 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 02:53:15.781414 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 02:53:15.781432 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-11 02:53:15.781450 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 02:53:15.781467 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 02:53:15.781483 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 02:53:15.781500 | orchestrator | 2026-04-11 02:53:15.781517 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 02:53:15.781533 | orchestrator | Saturday 11 April 2026 02:53:02 +0000 (0:00:00.977) 0:00:44.248 ******** 2026-04-11 02:53:15.781576 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 02:53:15.781594 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 02:53:15.781613 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 02:53:15.781631 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-11 02:53:15.781649 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 02:53:15.781667 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 02:53:15.781686 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 02:53:15.781704 | orchestrator | 2026-04-11 02:53:15.781741 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 02:53:15.781759 | orchestrator | Saturday 11 April 2026 02:53:04 +0000 (0:00:02.165) 0:00:46.414 ******** 2026-04-11 02:53:15.781779 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:53:15.781799 | orchestrator | 2026-04-11 02:53:15.781817 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 02:53:15.781835 | orchestrator | Saturday 11 April 2026 02:53:06 +0000 (0:00:01.693) 0:00:48.107 ******** 2026-04-11 02:53:15.781854 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:53:15.781872 | orchestrator | 2026-04-11 02:53:15.781890 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 02:53:15.781908 | orchestrator | Saturday 11 April 2026 02:53:07 +0000 (0:00:01.557) 0:00:49.665 ******** 2026-04-11 02:53:15.781954 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:15.781972 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:15.781991 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:15.782009 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:53:15.782105 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:53:15.782123 | 
orchestrator | ok: [testbed-node-2] 2026-04-11 02:53:15.782141 | orchestrator | 2026-04-11 02:53:15.782158 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 02:53:15.782175 | orchestrator | Saturday 11 April 2026 02:53:09 +0000 (0:00:01.363) 0:00:51.029 ******** 2026-04-11 02:53:15.782192 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:15.782211 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:53:15.782232 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:15.782253 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:53:15.782271 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:15.782290 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:53:15.782308 | orchestrator | 2026-04-11 02:53:15.782327 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-11 02:53:15.782345 | orchestrator | Saturday 11 April 2026 02:53:09 +0000 (0:00:00.762) 0:00:51.791 ******** 2026-04-11 02:53:15.782364 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:53:15.782382 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:53:15.782406 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:15.782425 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:53:15.782444 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:15.782475 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:15.782494 | orchestrator | 2026-04-11 02:53:15.782516 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 02:53:15.782534 | orchestrator | Saturday 11 April 2026 02:53:10 +0000 (0:00:00.968) 0:00:52.760 ******** 2026-04-11 02:53:15.782550 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:15.782566 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:53:15.782583 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:15.782602 | orchestrator | ok: [testbed-node-4] 2026-04-11 
02:53:15.782620 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:15.782638 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:53:15.782658 | orchestrator | 2026-04-11 02:53:15.782677 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 02:53:15.782696 | orchestrator | Saturday 11 April 2026 02:53:11 +0000 (0:00:00.781) 0:00:53.541 ******** 2026-04-11 02:53:15.782715 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:15.782735 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:15.782755 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:15.782774 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:53:15.782794 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:53:15.782813 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:53:15.782833 | orchestrator | 2026-04-11 02:53:15.782867 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 02:53:15.782884 | orchestrator | Saturday 11 April 2026 02:53:13 +0000 (0:00:01.357) 0:00:54.899 ******** 2026-04-11 02:53:15.782902 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:15.782947 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:15.782967 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:15.782985 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:15.783082 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:15.783104 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:15.783124 | orchestrator | 2026-04-11 02:53:15.783145 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 02:53:15.783165 | orchestrator | Saturday 11 April 2026 02:53:13 +0000 (0:00:00.665) 0:00:55.564 ******** 2026-04-11 02:53:15.783185 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:15.783205 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:15.783225 | 
orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:15.783245 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:15.783264 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:15.783285 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:15.783306 | orchestrator | 2026-04-11 02:53:15.783327 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 02:53:15.783349 | orchestrator | Saturday 11 April 2026 02:53:14 +0000 (0:00:00.939) 0:00:56.504 ******** 2026-04-11 02:53:15.783370 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:53:15.783408 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:53:36.165387 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:53:36.165466 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:53:36.165473 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:53:36.165478 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:53:36.165483 | orchestrator | 2026-04-11 02:53:36.165488 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 02:53:36.165495 | orchestrator | Saturday 11 April 2026 02:53:15 +0000 (0:00:01.060) 0:00:57.565 ******** 2026-04-11 02:53:36.165499 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:53:36.165504 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:53:36.165508 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:53:36.165513 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:53:36.165517 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:53:36.165521 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:53:36.165526 | orchestrator | 2026-04-11 02:53:36.165530 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 02:53:36.165535 | orchestrator | Saturday 11 April 2026 02:53:17 +0000 (0:00:01.430) 0:00:58.995 ******** 2026-04-11 02:53:36.165540 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:36.165549 | 
orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:36.165556 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:36.165565 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:36.165572 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:36.165580 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:36.165588 | orchestrator | 2026-04-11 02:53:36.165596 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 02:53:36.165602 | orchestrator | Saturday 11 April 2026 02:53:17 +0000 (0:00:00.655) 0:00:59.651 ******** 2026-04-11 02:53:36.165607 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:36.165611 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:36.165616 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:36.165620 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:53:36.165625 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:53:36.165630 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:53:36.165637 | orchestrator | 2026-04-11 02:53:36.165644 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 02:53:36.165652 | orchestrator | Saturday 11 April 2026 02:53:18 +0000 (0:00:00.946) 0:01:00.598 ******** 2026-04-11 02:53:36.165662 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:53:36.165692 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:53:36.165700 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:53:36.165706 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:36.165713 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:36.165719 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:36.165726 | orchestrator | 2026-04-11 02:53:36.165733 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 02:53:36.165740 | orchestrator | Saturday 11 April 2026 02:53:19 +0000 (0:00:00.640) 0:01:01.238 ******** 
2026-04-11 02:53:36.165745 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:53:36.165752 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:53:36.165759 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:53:36.165766 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:36.165772 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:36.165779 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:36.165785 | orchestrator | 2026-04-11 02:53:36.165793 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 02:53:36.165800 | orchestrator | Saturday 11 April 2026 02:53:20 +0000 (0:00:00.948) 0:01:02.187 ******** 2026-04-11 02:53:36.165807 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:53:36.165813 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:53:36.165820 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:53:36.165826 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:36.165833 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:36.165854 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:36.165861 | orchestrator | 2026-04-11 02:53:36.165868 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 02:53:36.165876 | orchestrator | Saturday 11 April 2026 02:53:21 +0000 (0:00:00.648) 0:01:02.835 ******** 2026-04-11 02:53:36.165884 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:36.165889 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:36.165893 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:36.165898 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:36.165902 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:36.165906 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:36.165910 | orchestrator | 2026-04-11 02:53:36.165915 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 
02:53:36.165919 | orchestrator | Saturday 11 April 2026 02:53:21 +0000 (0:00:00.875) 0:01:03.711 ******** 2026-04-11 02:53:36.165924 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:36.165928 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:36.165932 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:36.165937 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:36.165978 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:36.165983 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:36.165988 | orchestrator | 2026-04-11 02:53:36.165993 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 02:53:36.165998 | orchestrator | Saturday 11 April 2026 02:53:22 +0000 (0:00:00.675) 0:01:04.386 ******** 2026-04-11 02:53:36.166003 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:36.166008 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:36.166013 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:36.166062 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:53:36.166070 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:53:36.166079 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:53:36.166087 | orchestrator | 2026-04-11 02:53:36.166095 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 02:53:36.166100 | orchestrator | Saturday 11 April 2026 02:53:23 +0000 (0:00:00.944) 0:01:05.331 ******** 2026-04-11 02:53:36.166104 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:53:36.166108 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:53:36.166113 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:53:36.166117 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:53:36.166122 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:53:36.166126 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:53:36.166138 | orchestrator | 2026-04-11 02:53:36.166142 | orchestrator | TASK 
[ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 02:53:36.166147 | orchestrator | Saturday 11 April 2026 02:53:24 +0000 (0:00:00.693) 0:01:06.024 ******** 2026-04-11 02:53:36.166151 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:53:36.166171 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:53:36.166178 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:53:36.166186 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:53:36.166193 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:53:36.166199 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:53:36.166208 | orchestrator | 2026-04-11 02:53:36.166212 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-11 02:53:36.166217 | orchestrator | Saturday 11 April 2026 02:53:25 +0000 (0:00:01.443) 0:01:07.468 ******** 2026-04-11 02:53:36.166221 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:53:36.166226 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:53:36.166230 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:53:36.166234 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:53:36.166239 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:53:36.166243 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:53:36.166247 | orchestrator | 2026-04-11 02:53:36.166252 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-11 02:53:36.166256 | orchestrator | Saturday 11 April 2026 02:53:27 +0000 (0:00:01.860) 0:01:09.328 ******** 2026-04-11 02:53:36.166260 | orchestrator | changed: [testbed-node-4] 2026-04-11 02:53:36.166265 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:53:36.166269 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:53:36.166273 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:53:36.166277 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:53:36.166282 | orchestrator | changed: [testbed-node-2] 
2026-04-11 02:53:36.166286 | orchestrator | 2026-04-11 02:53:36.166290 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-11 02:53:36.166295 | orchestrator | Saturday 11 April 2026 02:53:29 +0000 (0:00:02.331) 0:01:11.659 ******** 2026-04-11 02:53:36.166300 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:53:36.166307 | orchestrator | 2026-04-11 02:53:36.166313 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-11 02:53:36.166320 | orchestrator | Saturday 11 April 2026 02:53:31 +0000 (0:00:01.343) 0:01:13.003 ******** 2026-04-11 02:53:36.166329 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:36.166340 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:36.166346 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:36.166352 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:36.166359 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:36.166366 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:36.166372 | orchestrator | 2026-04-11 02:53:36.166380 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-11 02:53:36.166386 | orchestrator | Saturday 11 April 2026 02:53:31 +0000 (0:00:00.685) 0:01:13.689 ******** 2026-04-11 02:53:36.166393 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:36.166399 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:36.166405 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:36.166412 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:36.166418 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:36.166424 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:36.166431 | orchestrator | 2026-04-11 02:53:36.166437 | 
orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-11 02:53:36.166444 | orchestrator | Saturday 11 April 2026 02:53:32 +0000 (0:00:00.977) 0:01:14.666 ******** 2026-04-11 02:53:36.166450 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 02:53:36.166463 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 02:53:36.166477 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 02:53:36.166484 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 02:53:36.166490 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 02:53:36.166497 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 02:53:36.166505 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 02:53:36.166512 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 02:53:36.166518 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 02:53:36.166525 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 02:53:36.166532 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 02:53:36.166538 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 02:53:36.166545 | orchestrator | 2026-04-11 02:53:36.166552 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-11 02:53:36.166559 | orchestrator | Saturday 11 April 2026 02:53:34 +0000 (0:00:01.370) 0:01:16.037 ******** 2026-04-11 02:53:36.166566 | orchestrator | 
changed: [testbed-node-4] 2026-04-11 02:53:36.166573 | orchestrator | changed: [testbed-node-3] 2026-04-11 02:53:36.166580 | orchestrator | changed: [testbed-node-5] 2026-04-11 02:53:36.166587 | orchestrator | changed: [testbed-node-0] 2026-04-11 02:53:36.166594 | orchestrator | changed: [testbed-node-1] 2026-04-11 02:53:36.166601 | orchestrator | changed: [testbed-node-2] 2026-04-11 02:53:36.166608 | orchestrator | 2026-04-11 02:53:36.166615 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-11 02:53:36.166623 | orchestrator | Saturday 11 April 2026 02:53:35 +0000 (0:00:01.239) 0:01:17.276 ******** 2026-04-11 02:53:36.166630 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:53:36.166638 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:53:36.166643 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:53:36.166647 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:53:36.166652 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:53:36.166656 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:53:36.166660 | orchestrator | 2026-04-11 02:53:36.166672 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-11 02:54:47.535708 | orchestrator | Saturday 11 April 2026 02:53:36 +0000 (0:00:00.671) 0:01:17.948 ******** 2026-04-11 02:54:47.535839 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:47.535861 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:54:47.535872 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:47.535882 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:47.535892 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:47.535902 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:47.535912 | orchestrator | 2026-04-11 02:54:47.535922 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-11 02:54:47.535933 | 
orchestrator | Saturday 11 April 2026 02:53:37 +0000 (0:00:00.937) 0:01:18.885 ******** 2026-04-11 02:54:47.535943 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:47.535953 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:54:47.535962 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:47.535972 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:47.535982 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:47.535991 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:47.536001 | orchestrator | 2026-04-11 02:54:47.536011 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-11 02:54:47.536066 | orchestrator | Saturday 11 April 2026 02:53:37 +0000 (0:00:00.731) 0:01:19.616 ******** 2026-04-11 02:54:47.536102 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:54:47.536114 | orchestrator | 2026-04-11 02:54:47.536124 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-11 02:54:47.536134 | orchestrator | Saturday 11 April 2026 02:53:39 +0000 (0:00:01.376) 0:01:20.992 ******** 2026-04-11 02:54:47.536144 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:54:47.536154 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:54:47.536164 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:54:47.536173 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:54:47.536183 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:54:47.536192 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:54:47.536202 | orchestrator | 2026-04-11 02:54:47.536212 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-11 02:54:47.536223 | orchestrator | Saturday 11 April 2026 02:54:33 +0000 (0:00:54.046) 0:02:15.039 ******** 2026-04-11 
02:54:47.536234 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 02:54:47.536251 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 02:54:47.536276 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 02:54:47.536294 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:47.536311 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 02:54:47.536327 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 02:54:47.536342 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 02:54:47.536358 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:54:47.536374 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 02:54:47.536391 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 02:54:47.536425 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 02:54:47.536445 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:47.536463 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 02:54:47.536479 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 02:54:47.536495 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 02:54:47.536507 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:47.536519 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 02:54:47.536529 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 02:54:47.536541 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-04-11 02:54:47.536551 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:47.536563 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 02:54:47.536576 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 02:54:47.536592 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 02:54:47.536615 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:47.536634 | orchestrator | 2026-04-11 02:54:47.536650 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-11 02:54:47.536666 | orchestrator | Saturday 11 April 2026 02:54:34 +0000 (0:00:00.771) 0:02:15.811 ******** 2026-04-11 02:54:47.536680 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:47.536693 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:54:47.536707 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:47.536721 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:47.536736 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:47.536769 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:47.536785 | orchestrator | 2026-04-11 02:54:47.536801 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-11 02:54:47.536816 | orchestrator | Saturday 11 April 2026 02:54:34 +0000 (0:00:00.942) 0:02:16.753 ******** 2026-04-11 02:54:47.536833 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:47.536849 | orchestrator | 2026-04-11 02:54:47.536866 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-11 02:54:47.536883 | orchestrator | Saturday 11 April 2026 02:54:35 +0000 (0:00:00.156) 0:02:16.909 ******** 2026-04-11 02:54:47.536899 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:47.536941 | orchestrator | 
skipping: [testbed-node-4] 2026-04-11 02:54:47.536957 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:47.536973 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:47.536988 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:47.537003 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:47.537042 | orchestrator | 2026-04-11 02:54:47.537059 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-11 02:54:47.537075 | orchestrator | Saturday 11 April 2026 02:54:35 +0000 (0:00:00.719) 0:02:17.629 ******** 2026-04-11 02:54:47.537093 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:47.537108 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:54:47.537125 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:47.537141 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:47.537158 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:47.537175 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:47.537191 | orchestrator | 2026-04-11 02:54:47.537207 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-11 02:54:47.537221 | orchestrator | Saturday 11 April 2026 02:54:36 +0000 (0:00:01.011) 0:02:18.640 ******** 2026-04-11 02:54:47.537231 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:47.537240 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:54:47.537250 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:47.537260 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:47.537270 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:47.537279 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:47.537289 | orchestrator | 2026-04-11 02:54:47.537299 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-11 02:54:47.537309 | orchestrator | Saturday 11 April 2026 02:54:37 +0000 (0:00:00.708) 
0:02:19.348 ******** 2026-04-11 02:54:47.537319 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:54:47.537329 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:54:47.537338 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:54:47.537348 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:54:47.537358 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:54:47.537368 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:54:47.537377 | orchestrator | 2026-04-11 02:54:47.537387 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-11 02:54:47.537397 | orchestrator | Saturday 11 April 2026 02:54:40 +0000 (0:00:03.428) 0:02:22.776 ******** 2026-04-11 02:54:47.537407 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:54:47.537416 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:54:47.537426 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:54:47.537435 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:54:47.537445 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:54:47.537454 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:54:47.537464 | orchestrator | 2026-04-11 02:54:47.537474 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-11 02:54:47.537483 | orchestrator | Saturday 11 April 2026 02:54:41 +0000 (0:00:00.669) 0:02:23.446 ******** 2026-04-11 02:54:47.537494 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:54:47.537505 | orchestrator | 2026-04-11 02:54:47.537515 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-11 02:54:47.537536 | orchestrator | Saturday 11 April 2026 02:54:43 +0000 (0:00:01.471) 0:02:24.918 ******** 2026-04-11 02:54:47.537545 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:47.537555 | orchestrator | skipping: 
[testbed-node-4] 2026-04-11 02:54:47.537565 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:47.537606 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:47.537623 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:47.537638 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:47.537654 | orchestrator | 2026-04-11 02:54:47.537671 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-11 02:54:47.537688 | orchestrator | Saturday 11 April 2026 02:54:44 +0000 (0:00:00.940) 0:02:25.858 ******** 2026-04-11 02:54:47.537705 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:47.537722 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:54:47.537734 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:47.537744 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:47.537753 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:47.537762 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:47.537772 | orchestrator | 2026-04-11 02:54:47.537781 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-11 02:54:47.537791 | orchestrator | Saturday 11 April 2026 02:54:44 +0000 (0:00:00.727) 0:02:26.585 ******** 2026-04-11 02:54:47.537800 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:47.537810 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:54:47.537819 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:47.537829 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:47.537838 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:47.537848 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:47.537857 | orchestrator | 2026-04-11 02:54:47.537867 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-11 02:54:47.537877 | orchestrator | Saturday 11 April 2026 02:54:45 +0000 (0:00:01.043) 
0:02:27.629 ******** 2026-04-11 02:54:47.537886 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:47.537895 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:54:47.537905 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:47.537914 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:47.537924 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:47.537933 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:47.537942 | orchestrator | 2026-04-11 02:54:47.537952 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-11 02:54:47.537962 | orchestrator | Saturday 11 April 2026 02:54:46 +0000 (0:00:00.687) 0:02:28.316 ******** 2026-04-11 02:54:47.537971 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:47.537981 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:54:47.537990 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:47.538000 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:47.538009 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:47.538120 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:47.538131 | orchestrator | 2026-04-11 02:54:47.538141 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-11 02:54:47.538166 | orchestrator | Saturday 11 April 2026 02:54:47 +0000 (0:00:00.994) 0:02:29.311 ******** 2026-04-11 02:54:59.554412 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:59.554548 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:54:59.554566 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:59.554575 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:59.554584 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:59.554592 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:59.554602 | orchestrator | 2026-04-11 02:54:59.554620 | orchestrator | TASK [ceph-container-common : Set_fact 
ceph_release pacific] ******************* 2026-04-11 02:54:59.554631 | orchestrator | Saturday 11 April 2026 02:54:48 +0000 (0:00:00.674) 0:02:29.985 ******** 2026-04-11 02:54:59.554666 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:59.554677 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:54:59.554687 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:59.554697 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:59.554706 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:59.554715 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:59.554726 | orchestrator | 2026-04-11 02:54:59.554735 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-11 02:54:59.554744 | orchestrator | Saturday 11 April 2026 02:54:49 +0000 (0:00:00.931) 0:02:30.916 ******** 2026-04-11 02:54:59.554753 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:54:59.554762 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:54:59.554771 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:54:59.554781 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:54:59.554790 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:54:59.554800 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:54:59.554809 | orchestrator | 2026-04-11 02:54:59.554818 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-11 02:54:59.554827 | orchestrator | Saturday 11 April 2026 02:54:49 +0000 (0:00:00.732) 0:02:31.649 ******** 2026-04-11 02:54:59.554837 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:54:59.554846 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:54:59.554852 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:54:59.554857 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:54:59.554862 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:54:59.554868 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:54:59.554873 
| orchestrator | 2026-04-11 02:54:59.554879 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-11 02:54:59.554884 | orchestrator | Saturday 11 April 2026 02:54:51 +0000 (0:00:01.502) 0:02:33.151 ******** 2026-04-11 02:54:59.554891 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 02:54:59.554897 | orchestrator | 2026-04-11 02:54:59.554903 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-11 02:54:59.554910 | orchestrator | Saturday 11 April 2026 02:54:52 +0000 (0:00:01.509) 0:02:34.661 ******** 2026-04-11 02:54:59.554916 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-04-11 02:54:59.554923 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-11 02:54:59.554930 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-04-11 02:54:59.554936 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-04-11 02:54:59.554942 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-04-11 02:54:59.554948 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-11 02:54:59.554967 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-11 02:54:59.554974 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-04-11 02:54:59.554980 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-11 02:54:59.554986 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-04-11 02:54:59.554992 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-11 02:54:59.554998 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-11 02:54:59.555005 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-11 
02:54:59.555011 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-11 02:54:59.555017 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-11 02:54:59.555022 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-11 02:54:59.555074 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-04-11 02:54:59.555081 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-11 02:54:59.555087 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-11 02:54:59.555102 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-11 02:54:59.555109 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-11 02:54:59.555115 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-11 02:54:59.555122 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-11 02:54:59.555129 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-11 02:54:59.555134 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-11 02:54:59.555140 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-11 02:54:59.555148 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-11 02:54:59.555156 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-11 02:54:59.555164 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-11 02:54:59.555172 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-11 02:54:59.555179 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-11 02:54:59.555187 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-11 02:54:59.555195 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-11 02:54:59.555204 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-11 02:54:59.555211 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-11 02:54:59.555239 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-11 02:54:59.555249 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-11 02:54:59.555257 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-11 02:54:59.555265 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-11 02:54:59.555273 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-11 02:54:59.555282 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 02:54:59.555295 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-11 02:54:59.555310 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-11 02:54:59.555318 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-11 02:54:59.555327 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-11 02:54:59.555335 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-11 02:54:59.555344 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 02:54:59.555352 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 02:54:59.555360 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-04-11 02:54:59.555368 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 02:54:59.555378 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-11 02:54:59.555386 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-11 02:54:59.555395 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 
2026-04-11 02:54:59.555404 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 02:54:59.555412 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 02:54:59.555422 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 02:54:59.555428 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 02:54:59.555433 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 02:54:59.555438 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 02:54:59.555443 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 02:54:59.555449 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 02:54:59.555462 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 02:54:59.555467 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 02:54:59.555473 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 02:54:59.555478 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 02:54:59.555483 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 02:54:59.555495 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 02:54:59.555500 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 02:54:59.555506 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 02:54:59.555511 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 02:54:59.555516 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 02:54:59.555522 | orchestrator | changed: 
[testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 02:54:59.555527 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 02:54:59.555533 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 02:54:59.555538 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 02:54:59.555543 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 02:54:59.555549 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 02:54:59.555554 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 02:54:59.555560 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 02:54:59.555565 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 02:54:59.555570 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 02:54:59.555576 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-04-11 02:54:59.555581 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-04-11 02:54:59.555587 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 02:54:59.555592 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-04-11 02:54:59.555597 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 02:54:59.555603 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 02:54:59.555608 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-04-11 02:54:59.555613 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-04-11 02:54:59.555618 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-04-11 02:54:59.555624 | orchestrator 
| changed: [testbed-node-5] => (item=/var/log/ceph) 2026-04-11 02:54:59.555637 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-04-11 02:55:15.510786 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-04-11 02:55:15.510867 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-04-11 02:55:15.510873 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-04-11 02:55:15.510878 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-04-11 02:55:15.510882 | orchestrator | 2026-04-11 02:55:15.510887 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-11 02:55:15.510893 | orchestrator | Saturday 11 April 2026 02:54:59 +0000 (0:00:06.631) 0:02:41.292 ******** 2026-04-11 02:55:15.510897 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:55:15.510902 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:55:15.510906 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:55:15.510910 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 02:55:15.510934 | orchestrator | 2026-04-11 02:55:15.510938 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-11 02:55:15.510942 | orchestrator | Saturday 11 April 2026 02:55:00 +0000 (0:00:01.135) 0:02:42.428 ******** 2026-04-11 02:55:15.510946 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 02:55:15.510952 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 02:55:15.510956 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 
2026-04-11 02:55:15.510960 | orchestrator |
2026-04-11 02:55:15.510964 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-11 02:55:15.510968 | orchestrator | Saturday 11 April 2026 02:55:01 +0000 (0:00:00.747) 0:02:43.176 ********
2026-04-11 02:55:15.510971 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-11 02:55:15.510975 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-11 02:55:15.510979 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 02:55:15.510983 | orchestrator |
2026-04-11 02:55:15.510986 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-11 02:55:15.510990 | orchestrator | Saturday 11 April 2026 02:55:02 +0000 (0:00:01.197) 0:02:44.373 ********
2026-04-11 02:55:15.510994 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:55:15.510998 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:55:15.511002 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:55:15.511005 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:15.511009 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:15.511013 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:15.511017 | orchestrator |
2026-04-11 02:55:15.511020 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-11 02:55:15.511034 | orchestrator | Saturday 11 April 2026 02:55:03 +0000 (0:00:00.953) 0:02:45.327 ********
2026-04-11 02:55:15.511038 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:55:15.511061 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:55:15.511065 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:55:15.511069 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:15.511073 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:15.511076 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:15.511080 | orchestrator |
2026-04-11 02:55:15.511084 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-11 02:55:15.511087 | orchestrator | Saturday 11 April 2026 02:55:04 +0000 (0:00:00.648) 0:02:45.976 ********
2026-04-11 02:55:15.511091 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:15.511095 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:15.511099 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:15.511103 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:15.511107 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:15.511111 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:15.511114 | orchestrator |
2026-04-11 02:55:15.511118 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-11 02:55:15.511122 | orchestrator | Saturday 11 April 2026 02:55:05 +0000 (0:00:00.983) 0:02:46.959 ********
2026-04-11 02:55:15.511126 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:15.511129 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:15.511133 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:15.511137 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:15.511141 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:15.511149 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:15.511153 | orchestrator |
2026-04-11 02:55:15.511157 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-11 02:55:15.511161 | orchestrator | Saturday 11 April 2026 02:55:05 +0000 (0:00:00.684) 0:02:47.644 ********
2026-04-11 02:55:15.511164 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:15.511168 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:15.511172 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:15.511176 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:15.511179 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:15.511183 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:15.511187 | orchestrator |
2026-04-11 02:55:15.511191 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-11 02:55:15.511195 | orchestrator | Saturday 11 April 2026 02:55:06 +0000 (0:00:00.945) 0:02:48.590 ********
2026-04-11 02:55:15.511199 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:15.511202 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:15.511206 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:15.511210 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:15.511224 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:15.511228 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:15.511231 | orchestrator |
2026-04-11 02:55:15.511235 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-11 02:55:15.511239 | orchestrator | Saturday 11 April 2026 02:55:07 +0000 (0:00:00.644) 0:02:49.234 ********
2026-04-11 02:55:15.511243 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:15.511247 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:15.511250 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:15.511254 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:15.511258 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:15.511261 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:15.511265 | orchestrator |
2026-04-11 02:55:15.511269 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-11 02:55:15.511273 | orchestrator | Saturday 11 April 2026 02:55:08 +0000 (0:00:00.959) 0:02:50.193 ********
2026-04-11 02:55:15.511277 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:15.511280 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:15.511284 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:15.511288 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:15.511292 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:15.511295 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:15.511299 | orchestrator |
2026-04-11 02:55:15.511303 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-11 02:55:15.511307 | orchestrator | Saturday 11 April 2026 02:55:09 +0000 (0:00:00.665) 0:02:50.859 ********
2026-04-11 02:55:15.511310 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:15.511314 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:15.511318 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:15.511322 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:55:15.511326 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:55:15.511329 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:55:15.511333 | orchestrator |
2026-04-11 02:55:15.511337 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-11 02:55:15.511341 | orchestrator | Saturday 11 April 2026 02:55:11 +0000 (0:00:02.845) 0:02:53.705 ********
2026-04-11 02:55:15.511345 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:55:15.511350 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:55:15.511354 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:55:15.511359 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:15.511363 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:15.511367 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:15.511372 | orchestrator |
2026-04-11 02:55:15.511376 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-11 02:55:15.511385 | orchestrator | Saturday 11 April 2026 02:55:12 +0000 (0:00:00.703) 0:02:54.408 ********
2026-04-11 02:55:15.511390 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:55:15.511394 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:55:15.511399 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:55:15.511403 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:15.511407 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:15.511411 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:15.511416 | orchestrator |
2026-04-11 02:55:15.511420 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-11 02:55:15.511424 | orchestrator | Saturday 11 April 2026 02:55:13 +0000 (0:00:00.996) 0:02:55.405 ********
2026-04-11 02:55:15.511429 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:15.511433 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:15.511440 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:15.511445 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:15.511449 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:15.511454 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:15.511458 | orchestrator |
2026-04-11 02:55:15.511462 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-11 02:55:15.511467 | orchestrator | Saturday 11 April 2026 02:55:14 +0000 (0:00:00.692) 0:02:56.098 ********
2026-04-11 02:55:15.511471 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-11 02:55:15.511476 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-11 02:55:15.511480 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 02:55:15.511485 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:15.511489 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:15.511493 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:15.511498 | orchestrator |
2026-04-11 02:55:15.511502 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-11 02:55:15.511506 | orchestrator | Saturday 11 April 2026 02:55:15 +0000 (0:00:00.982) 0:02:57.080 ********
2026-04-11 02:55:15.511513 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-11 02:55:15.511521 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-11 02:55:15.511527 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:15.511534 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-11 02:55:34.964968 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-11 02:55:34.965104 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:34.965121 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-11 02:55:34.965152 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-11 02:55:34.965160 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:34.965168 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:34.965176 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:34.965183 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:34.965190 | orchestrator |
2026-04-11 02:55:34.965199 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-11 02:55:34.965207 | orchestrator | Saturday 11 April 2026 02:55:15 +0000 (0:00:00.708) 0:02:57.788 ********
2026-04-11 02:55:34.965215 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:34.965222 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:34.965229 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:34.965236 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:34.965244 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:34.965251 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:34.965258 | orchestrator |
2026-04-11 02:55:34.965266 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-11 02:55:34.965273 | orchestrator | Saturday 11 April 2026 02:55:16 +0000 (0:00:00.966) 0:02:58.755 ********
2026-04-11 02:55:34.965280 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:34.965287 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:34.965294 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:34.965301 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:34.965309 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:34.965316 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:34.965323 | orchestrator |
2026-04-11 02:55:34.965331 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 02:55:34.965352 | orchestrator | Saturday 11 April 2026 02:55:17 +0000 (0:00:00.698) 0:02:59.454 ********
2026-04-11 02:55:34.965360 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:34.965367 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:34.965374 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:34.965381 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:34.965388 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:34.965396 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:34.965403 | orchestrator |
2026-04-11 02:55:34.965410 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 02:55:34.965418 | orchestrator | Saturday 11 April 2026 02:55:18 +0000 (0:00:00.979) 0:03:00.433 ********
2026-04-11 02:55:34.965430 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:34.965442 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:34.965453 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:34.965465 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:34.965476 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:34.965488 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:34.965500 | orchestrator |
2026-04-11 02:55:34.965511 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 02:55:34.965524 | orchestrator | Saturday 11 April 2026 02:55:19 +0000 (0:00:00.963) 0:03:01.397 ********
2026-04-11 02:55:34.965532 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:34.965539 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:34.965546 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:34.965553 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:34.965560 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:34.965567 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:34.965582 | orchestrator |
2026-04-11 02:55:34.965589 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 02:55:34.965596 | orchestrator | Saturday 11 April 2026 02:55:20 +0000 (0:00:00.814) 0:03:02.212 ********
2026-04-11 02:55:34.965604 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:55:34.965612 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:55:34.965619 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:55:34.965626 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:34.965633 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:34.965640 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:34.965647 | orchestrator |
2026-04-11 02:55:34.965654 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 02:55:34.965662 | orchestrator | Saturday 11 April 2026 02:55:21 +0000 (0:00:00.944) 0:03:03.156 ********
2026-04-11 02:55:34.965669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 02:55:34.965676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 02:55:34.965683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 02:55:34.965691 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:34.965698 | orchestrator |
2026-04-11 02:55:34.965705 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 02:55:34.965713 | orchestrator | Saturday 11 April 2026 02:55:21 +0000 (0:00:00.454) 0:03:03.611 ********
2026-04-11 02:55:34.965736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 02:55:34.965745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 02:55:34.965752 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 02:55:34.965759 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:34.965766 | orchestrator |
2026-04-11 02:55:34.965774 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 02:55:34.965781 | orchestrator | Saturday 11 April 2026 02:55:22 +0000 (0:00:00.511) 0:03:04.123 ********
2026-04-11 02:55:34.965788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 02:55:34.965795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 02:55:34.965803 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 02:55:34.965810 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:34.965817 | orchestrator |
2026-04-11 02:55:34.965824 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 02:55:34.965832 | orchestrator | Saturday 11 April 2026 02:55:22 +0000 (0:00:00.473) 0:03:04.597 ********
2026-04-11 02:55:34.965839 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:55:34.965846 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:55:34.965853 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:55:34.965861 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:34.965868 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:34.965875 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:34.965882 | orchestrator |
2026-04-11 02:55:34.965889 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 02:55:34.965896 | orchestrator | Saturday 11 April 2026 02:55:23 +0000 (0:00:00.689) 0:03:05.286 ********
2026-04-11 02:55:34.965904 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-11 02:55:34.965911 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-11 02:55:34.965918 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-11 02:55:34.965926 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-11 02:55:34.965934 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:34.965947 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-11 02:55:34.965968 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:34.965980 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-11 02:55:34.965992 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:34.966004 | orchestrator |
2026-04-11 02:55:34.966056 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-11 02:55:34.966179 | orchestrator | Saturday 11 April 2026 02:55:25 +0000 (0:00:02.041) 0:03:07.328 ********
2026-04-11 02:55:34.966193 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:55:34.966205 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:55:34.966217 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:55:34.966229 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:55:34.966241 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:55:34.966254 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:55:34.966266 | orchestrator |
2026-04-11 02:55:34.966279 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-11 02:55:34.966292 | orchestrator | Saturday 11 April 2026 02:55:28 +0000 (0:00:02.869) 0:03:10.197 ********
2026-04-11 02:55:34.966303 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:55:34.966322 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:55:34.966330 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:55:34.966337 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:55:34.966344 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:55:34.966351 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:55:34.966359 | orchestrator |
2026-04-11 02:55:34.966366 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-11 02:55:34.966373 | orchestrator | Saturday 11 April 2026 02:55:29 +0000 (0:00:01.038) 0:03:11.236 ********
2026-04-11 02:55:34.966380 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:34.966387 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:34.966394 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:34.966402 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:55:34.966409 | orchestrator |
2026-04-11 02:55:34.966416 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-11 02:55:34.966424 | orchestrator | Saturday 11 April 2026 02:55:30 +0000 (0:00:01.272) 0:03:12.508 ********
2026-04-11 02:55:34.966431 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:55:34.966439 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:55:34.966446 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:55:34.966453 | orchestrator |
2026-04-11 02:55:34.966460 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-11 02:55:34.966468 | orchestrator | Saturday 11 April 2026 02:55:31 +0000 (0:00:00.407) 0:03:12.916 ********
2026-04-11 02:55:34.966475 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:55:34.966482 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:55:34.966489 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:55:34.966496 | orchestrator |
2026-04-11 02:55:34.966503 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-11 02:55:34.966510 | orchestrator | Saturday 11 April 2026 02:55:32 +0000 (0:00:01.557) 0:03:14.474 ********
2026-04-11 02:55:34.966517 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 02:55:34.966525 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 02:55:34.966532 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 02:55:34.966543 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:34.966560 | orchestrator |
2026-04-11 02:55:34.966575 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-11 02:55:34.966586 | orchestrator | Saturday 11 April 2026 02:55:33 +0000 (0:00:00.719) 0:03:15.193 ********
2026-04-11 02:55:34.966599 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:55:34.966610 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:55:34.966622 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:55:34.966633 | orchestrator |
2026-04-11 02:55:34.966643 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-11 02:55:34.966654 | orchestrator | Saturday 11 April 2026 02:55:33 +0000 (0:00:00.341) 0:03:15.535 ********
2026-04-11 02:55:34.966665 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:34.966690 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:53.061334 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:53.061436 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 02:55:53.061454 | orchestrator |
2026-04-11 02:55:53.061467 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-11 02:55:53.061474 | orchestrator | Saturday 11 April 2026 02:55:34 +0000 (0:00:01.211) 0:03:16.747 ********
2026-04-11 02:55:53.061480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 02:55:53.061486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 02:55:53.061491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 02:55:53.061497 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061503 | orchestrator |
2026-04-11 02:55:53.061508 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-11 02:55:53.061514 | orchestrator | Saturday 11 April 2026 02:55:35 +0000 (0:00:00.447) 0:03:17.194 ********
2026-04-11 02:55:53.061519 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061524 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:53.061530 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:53.061535 | orchestrator |
2026-04-11 02:55:53.061541 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-11 02:55:53.061546 | orchestrator | Saturday 11 April 2026 02:55:35 +0000 (0:00:00.395) 0:03:17.590 ********
2026-04-11 02:55:53.061551 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061557 | orchestrator |
2026-04-11 02:55:53.061563 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-11 02:55:53.061568 | orchestrator | Saturday 11 April 2026 02:55:36 +0000 (0:00:00.274) 0:03:17.864 ********
2026-04-11 02:55:53.061573 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061579 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:53.061584 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:53.061590 | orchestrator |
2026-04-11 02:55:53.061595 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-11 02:55:53.061601 | orchestrator | Saturday 11 April 2026 02:55:36 +0000 (0:00:00.409) 0:03:18.274 ********
2026-04-11 02:55:53.061606 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061612 | orchestrator |
2026-04-11 02:55:53.061617 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-11 02:55:53.061622 | orchestrator | Saturday 11 April 2026 02:55:37 +0000 (0:00:00.782) 0:03:19.056 ********
2026-04-11 02:55:53.061628 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061633 | orchestrator |
2026-04-11 02:55:53.061639 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-11 02:55:53.061644 | orchestrator | Saturday 11 April 2026 02:55:37 +0000 (0:00:00.255) 0:03:19.312 ********
2026-04-11 02:55:53.061650 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061655 | orchestrator |
2026-04-11 02:55:53.061661 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-11 02:55:53.061666 | orchestrator | Saturday 11 April 2026 02:55:37 +0000 (0:00:00.148) 0:03:19.460 ********
2026-04-11 02:55:53.061682 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061688 | orchestrator |
2026-04-11 02:55:53.061693 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-11 02:55:53.061699 | orchestrator | Saturday 11 April 2026 02:55:37 +0000 (0:00:00.242) 0:03:19.703 ********
2026-04-11 02:55:53.061704 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061710 | orchestrator |
2026-04-11 02:55:53.061716 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-11 02:55:53.061721 | orchestrator | Saturday 11 April 2026 02:55:38 +0000 (0:00:00.275) 0:03:19.979 ********
2026-04-11 02:55:53.061727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 02:55:53.061732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 02:55:53.061738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 02:55:53.061748 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061754 | orchestrator |
2026-04-11 02:55:53.061759 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-11 02:55:53.061765 | orchestrator | Saturday 11 April 2026 02:55:38 +0000 (0:00:00.440) 0:03:20.419 ********
2026-04-11 02:55:53.061770 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061776 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:53.061781 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:53.061786 | orchestrator |
2026-04-11 02:55:53.061792 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-11 02:55:53.061797 | orchestrator | Saturday 11 April 2026 02:55:39 +0000 (0:00:00.407) 0:03:20.827 ********
2026-04-11 02:55:53.061803 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061808 | orchestrator |
2026-04-11 02:55:53.061814 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-11 02:55:53.061819 | orchestrator | Saturday 11 April 2026 02:55:39 +0000 (0:00:00.248) 0:03:21.076 ********
2026-04-11 02:55:53.061824 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061830 | orchestrator |
2026-04-11 02:55:53.061835 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-11 02:55:53.061841 | orchestrator | Saturday 11 April 2026 02:55:39 +0000 (0:00:00.238) 0:03:21.315 ********
2026-04-11 02:55:53.061846 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:53.061852 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:53.061857 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:53.061862 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 02:55:53.061868 | orchestrator |
2026-04-11 02:55:53.061874 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-11 02:55:53.061879 | orchestrator | Saturday 11 April 2026 02:55:40 +0000 (0:00:01.236) 0:03:22.551 ********
2026-04-11 02:55:53.061885 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:55:53.061891 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:55:53.061897 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:55:53.061902 | orchestrator |
2026-04-11 02:55:53.061920 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-11 02:55:53.061926 | orchestrator | Saturday 11 April 2026 02:55:41 +0000 (0:00:00.373) 0:03:22.924 ********
2026-04-11 02:55:53.061931 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:55:53.061937 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:55:53.061942 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:55:53.061947 | orchestrator |
2026-04-11 02:55:53.061953 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-11 02:55:53.061958 | orchestrator | Saturday 11 April 2026 02:55:42 +0000 (0:00:01.551) 0:03:24.475 ********
2026-04-11 02:55:53.061964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 02:55:53.061969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 02:55:53.061975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 02:55:53.061980 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.061985 | orchestrator |
2026-04-11 02:55:53.061991 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-11 02:55:53.061997 | orchestrator | Saturday 11 April 2026 02:55:43 +0000 (0:00:00.713) 0:03:25.189 ********
2026-04-11 02:55:53.062002 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:55:53.062007 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:55:53.062013 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:55:53.062068 | orchestrator |
2026-04-11 02:55:53.062092 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-11 02:55:53.062101 | orchestrator | Saturday 11 April 2026 02:55:43 +0000 (0:00:00.402) 0:03:25.592 ********
2026-04-11 02:55:53.062110 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:53.062119 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:53.062128 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:53.062144 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 02:55:53.062150 | orchestrator |
2026-04-11 02:55:53.062156 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-11 02:55:53.062161 | orchestrator | Saturday 11 April 2026 02:55:44 +0000 (0:00:01.150) 0:03:26.742 ********
2026-04-11 02:55:53.062167 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:55:53.062172 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:55:53.062177 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:55:53.062183 | orchestrator |
2026-04-11 02:55:53.062188 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-11 02:55:53.062194 | orchestrator | Saturday 11 April 2026 02:55:45 +0000 (0:00:00.413) 0:03:27.156 ********
2026-04-11 02:55:53.062199 | orchestrator | changed: [testbed-node-3]
2026-04-11 02:55:53.062204 | orchestrator | changed: [testbed-node-4]
2026-04-11 02:55:53.062210 | orchestrator | changed: [testbed-node-5]
2026-04-11 02:55:53.062215 | orchestrator |
2026-04-11 02:55:53.062220 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-11 02:55:53.062226 | orchestrator | Saturday 11 April 2026 02:55:46 +0000 (0:00:01.263) 0:03:28.419 ********
2026-04-11 02:55:53.062231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 02:55:53.062241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 02:55:53.062247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 02:55:53.062252 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.062258 | orchestrator |
2026-04-11 02:55:53.062263 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-11 02:55:53.062268 | orchestrator | Saturday 11 April 2026 02:55:47 +0000 (0:00:00.990) 0:03:29.410 ********
2026-04-11 02:55:53.062274 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:55:53.062279 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:55:53.062285 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:55:53.062290 | orchestrator |
2026-04-11 02:55:53.062295 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-11 02:55:53.062301 | orchestrator | Saturday 11 April 2026 02:55:48 +0000 (0:00:00.593) 0:03:30.004 ********
2026-04-11 02:55:53.062306 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.062312 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:53.062317 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:53.062322 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:55:53.062328 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:55:53.062333 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:55:53.062338 | orchestrator |
2026-04-11 02:55:53.062344 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-11 02:55:53.062349 | orchestrator | Saturday 11 April 2026 02:55:48 +0000 (0:00:00.710) 0:03:30.714 ********
2026-04-11 02:55:53.062355 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:55:53.062360 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:55:53.062366 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:55:53.062371 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:55:53.062376 | orchestrator |
2026-04-11 02:55:53.062382 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-11 02:55:53.062387 | orchestrator | Saturday 11 April 2026 02:55:50 +0000 (0:00:01.217) 0:03:31.932 ********
2026-04-11 02:55:53.062393 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:55:53.062398 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:55:53.062403 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:55:53.062409 | orchestrator |
2026-04-11 02:55:53.062414 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-11 02:55:53.062419 | orchestrator | Saturday 11 April 2026 02:55:50 +0000 (0:00:00.391) 0:03:32.323 ********
2026-04-11 02:55:53.062425 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:55:53.062434 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:55:53.062440 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:55:53.062445 | orchestrator |
2026-04-11 02:55:53.062450 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-11 02:55:53.062456 | orchestrator | Saturday 11 April 2026 02:55:51 +0000 (0:00:01.243) 0:03:33.567 ********
2026-04-11 02:55:53.062461 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 02:55:53.062467 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 02:55:53.062477 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 02:56:11.538809 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:56:11.538909 | orchestrator |
2026-04-11 02:56:11.538921 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-11 02:56:11.538933 | orchestrator | Saturday 11 April 2026 02:55:53 +0000 (0:00:01.270) 0:03:34.838 ********
2026-04-11 02:56:11.538939 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:56:11.538947 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:56:11.538953 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:56:11.538959 | orchestrator |
2026-04-11 02:56:11.538966 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-11 02:56:11.538972 | orchestrator |
2026-04-11 02:56:11.538979 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-11 02:56:11.538986 | orchestrator | Saturday 11 April 2026 02:55:53 +0000 (0:00:00.669) 0:03:35.508 ********
2026-04-11 02:56:11.538993 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:56:11.538998 | orchestrator |
2026-04-11 02:56:11.539003 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-11 02:56:11.539009 | orchestrator | Saturday 11 April 2026 02:55:54 +0000 (0:00:00.916) 0:03:36.424 ********
2026-04-11 02:56:11.539015 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:56:11.539022 | orchestrator |
2026-04-11 02:56:11.539027 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-11 02:56:11.539031 | orchestrator | Saturday 11 April 2026 02:55:55 +0000 (0:00:00.692) 0:03:37.117 ********
2026-04-11 02:56:11.539035 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:56:11.539040 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:56:11.539044 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:56:11.539060 | orchestrator |
2026-04-11 02:56:11.539065 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-11 02:56:11.539069 | orchestrator | Saturday 11 April 2026 02:55:56 +0000 (0:00:00.836) 0:03:37.954 ********
2026-04-11 02:56:11.539078 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:56:11.539083 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:56:11.539087 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:56:11.539091 | orchestrator |
2026-04-11 02:56:11.539127 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-11 02:56:11.539134 | orchestrator | Saturday 11 April 2026 02:55:56 +0000 (0:00:00.744) 0:03:38.698 ********
2026-04-11 02:56:11.539140 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:56:11.539145 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:56:11.539151 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:56:11.539160 | orchestrator |
2026-04-11 02:56:11.539168 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-11 02:56:11.539174 | orchestrator | Saturday 11 April 2026 02:55:57 +0000 (0:00:00.382) 0:03:39.081 ********
2026-04-11 02:56:11.539180 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:56:11.539187 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:56:11.539208 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:56:11.539214 | orchestrator |
2026-04-11 02:56:11.539219 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-11 02:56:11.539226 | orchestrator | Saturday 11 April 2026 02:55:57 +0000 (0:00:00.398) 0:03:39.479 ********
2026-04-11 02:56:11.539253 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:56:11.539260 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:56:11.539268 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:56:11.539274 | orchestrator |
2026-04-11 02:56:11.539280 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-11 02:56:11.539284 | orchestrator | Saturday 11 April 2026 02:55:58 +0000 (0:00:00.751) 0:03:40.231 ********
2026-04-11 02:56:11.539288 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:56:11.539292 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:56:11.539296 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:56:11.539299 | orchestrator |
2026-04-11 02:56:11.539303 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-11 02:56:11.539307 | orchestrator | Saturday 11 April 2026 02:55:59 +0000 (0:00:00.641) 0:03:40.873 ********
2026-04-11 02:56:11.539311 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:56:11.539315 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:56:11.539319 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:56:11.539323 | orchestrator |
2026-04-11 02:56:11.539327 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-11 02:56:11.539330 | orchestrator | Saturday 11 April 2026 02:55:59 +0000 (0:00:00.369) 0:03:41.243 ********
2026-04-11 02:56:11.539334 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:56:11.539338 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:56:11.539342 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:56:11.539345 | orchestrator |
2026-04-11 02:56:11.539349 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-11 02:56:11.539353 | orchestrator | Saturday 11 April 2026 02:56:00 +0000 (0:00:00.787) 0:03:42.030 ********
2026-04-11 02:56:11.539357 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:56:11.539361 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:56:11.539364 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:56:11.539368 | orchestrator |
2026-04-11 02:56:11.539372 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-11 02:56:11.539376 | orchestrator | Saturday 11 April 2026 02:56:01 +0000 (0:00:00.803) 0:03:42.834 ********
2026-04-11 02:56:11.539380 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:56:11.539384 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:56:11.539388 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:56:11.539393 | orchestrator |
2026-04-11 02:56:11.539397 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 02:56:11.539402 | orchestrator | Saturday 11 April 2026 02:56:01 +0000 (0:00:00.625) 0:03:43.459 ********
2026-04-11 02:56:11.539406 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:56:11.539411 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:56:11.539415 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:56:11.539419 | orchestrator |
2026-04-11 02:56:11.539424 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 02:56:11.539428 | orchestrator | Saturday 11 April 2026 02:56:02 +0000 (0:00:00.362) 0:03:43.822 ********
2026-04-11 02:56:11.539444 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:56:11.539449 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:56:11.539453 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:56:11.539457 | orchestrator |
2026-04-11 02:56:11.539462 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 02:56:11.539466 | orchestrator | Saturday 11 April 2026 02:56:02 +0000 (0:00:00.363) 0:03:44.185 ********
2026-04-11 02:56:11.539470 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:56:11.539475 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:56:11.539479 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:56:11.539484 | orchestrator |
2026-04-11 02:56:11.539488 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 02:56:11.539492 | orchestrator | Saturday 11 April 2026 02:56:02 +0000 (0:00:00.320) 0:03:44.505 ********
2026-04-11 02:56:11.539497 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:56:11.539506 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:56:11.539510 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:56:11.539514 | orchestrator |
2026-04-11 02:56:11.539518 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 02:56:11.539523 | orchestrator | Saturday 11 April 2026 02:56:03 +0000 (0:00:00.636) 0:03:45.141 ********
2026-04-11 02:56:11.539527 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:56:11.539532 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:56:11.539536 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:56:11.539540 | orchestrator |
2026-04-11 02:56:11.539545 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 02:56:11.539549 | orchestrator | Saturday 11 April 2026 02:56:03 +0000 (0:00:00.350) 0:03:45.492 ********
2026-04-11 02:56:11.539553 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:56:11.539558 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:56:11.539562 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:56:11.539567 | orchestrator |
2026-04-11 02:56:11.539571 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 02:56:11.539576 | orchestrator | Saturday 11 April 2026 02:56:04 +0000 (0:00:00.370) 0:03:45.862 ********
2026-04-11 02:56:11.539580 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:56:11.539585 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:56:11.539589 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:56:11.539593 | orchestrator |
2026-04-11 02:56:11.539598 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 02:56:11.539602 | orchestrator | Saturday 11 April 2026 02:56:04 +0000 (0:00:00.358) 0:03:46.221 ********
2026-04-11 02:56:11.539606 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:56:11.539611 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:56:11.539615 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:56:11.539619 | orchestrator |
2026-04-11 02:56:11.539624 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 02:56:11.539628 | orchestrator | Saturday 11 April 2026 02:56:05 +0000 (0:00:00.710) 0:03:46.932 ********
2026-04-11 02:56:11.539633 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:56:11.539637 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:56:11.539641 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:56:11.539645 | orchestrator |
2026-04-11 02:56:11.539661 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-11 02:56:11.539666 | orchestrator | Saturday 11 April 2026 02:56:05 +0000 (0:00:00.607) 0:03:47.539 ********
2026-04-11 02:56:11.539670 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:56:11.539675 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:56:11.539679 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:56:11.539683 | orchestrator |
2026-04-11 02:56:11.539688 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-11 02:56:11.539692 | orchestrator | Saturday 11 April 2026 02:56:06 +0000 (0:00:00.363) 0:03:47.902 ********
2026-04-11 02:56:11.539698 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:56:11.539702 | orchestrator |
2026-04-11 02:56:11.539707 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-11 02:56:11.539711 | orchestrator | Saturday 11 April 2026 02:56:07 +0000 (0:00:00.966) 0:03:48.868 ********
2026-04-11 02:56:11.539716 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:56:11.539720 | orchestrator |
2026-04-11 02:56:11.539724 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-11 02:56:11.539729 | orchestrator | Saturday 11 April 2026 02:56:07 +0000 (0:00:00.191) 0:03:49.059 ********
2026-04-11 02:56:11.539733 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-11 02:56:11.539738 | orchestrator |
2026-04-11 02:56:11.539742 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-11 02:56:11.539746 | orchestrator | Saturday 11 April 2026 02:56:08 +0000 (0:00:01.140) 0:03:50.200 ********
2026-04-11 02:56:11.539754 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:56:11.539758 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:56:11.539762 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:56:11.539765 | orchestrator |
2026-04-11 02:56:11.539769 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-11 02:56:11.539773 | orchestrator | Saturday 11 April 2026 02:56:08 +0000 (0:00:00.377) 0:03:50.577 ********
2026-04-11 02:56:11.539777 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:56:11.539781 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:56:11.539784 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:56:11.539788 | orchestrator |
2026-04-11 02:56:11.539792 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-11 02:56:11.539796 | orchestrator | Saturday 11 April 2026 02:56:09 +0000 (0:00:00.695) 0:03:51.273 ********
2026-04-11 02:56:11.539800 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:56:11.539804 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:56:11.539807 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:56:11.539811 | orchestrator |
2026-04-11 02:56:11.539815 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-11 02:56:11.539819 | orchestrator | Saturday 11 April 2026 02:56:10 +0000 (0:00:01.260) 0:03:52.534 ********
2026-04-11 02:56:11.539823 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:56:11.539827 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:56:11.539830 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:56:11.539834 | orchestrator |
2026-04-11 02:56:11.539841 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-11 02:57:22.697442 | orchestrator | Saturday 11 April 2026 02:56:11 +0000 (0:00:00.787) 0:03:53.321 ********
2026-04-11 02:57:22.697555 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:57:22.697575 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:57:22.697587 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:57:22.697599 | orchestrator |
2026-04-11 02:57:22.697612 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-11 02:57:22.697625 | orchestrator | Saturday 11 April 2026 02:56:12 +0000 (0:00:00.690) 0:03:54.012 ********
2026-04-11 02:57:22.697637 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:57:22.697652 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:57:22.697665 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:57:22.697676 | orchestrator |
2026-04-11 02:57:22.697690 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-11 02:57:22.697703 | orchestrator | Saturday 11 April 2026 02:56:13 +0000 (0:00:01.070) 0:03:55.082 ********
2026-04-11 02:57:22.697715 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:57:22.697729 | orchestrator |
2026-04-11 02:57:22.697742 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-11 02:57:22.697755 | orchestrator | Saturday 11 April 2026 02:56:15 +0000 (0:00:02.374) 0:03:57.457 ********
2026-04-11 02:57:22.697768 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:57:22.697781 | orchestrator |
2026-04-11 02:57:22.697794 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-11 02:57:22.697803 | orchestrator | Saturday 11 April 2026 02:56:16 +0000 (0:00:00.734) 0:03:58.191 ********
2026-04-11 02:57:22.697811 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-11 02:57:22.697819 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 02:57:22.697826 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 02:57:22.697834 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-11 02:57:22.697842 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-11 02:57:22.697849 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-11 02:57:22.697857 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-11 02:57:22.697864 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-04-11 02:57:22.697872 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-11 02:57:22.697902 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-11 02:57:22.697910 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-11 02:57:22.697917 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-11 02:57:22.697924 | orchestrator |
2026-04-11 02:57:22.697932 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-11 02:57:22.697939 | orchestrator | Saturday 11 April 2026 02:56:19 +0000 (0:00:03.048) 0:04:01.239 ********
2026-04-11 02:57:22.697946 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:57:22.697953 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:57:22.697975 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:57:22.697988 | orchestrator |
2026-04-11 02:57:22.698000 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-11 02:57:22.698061 | orchestrator | Saturday 11 April 2026 02:56:20 +0000 (0:00:01.248) 0:04:02.488 ********
2026-04-11 02:57:22.698077 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:57:22.698091 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:57:22.698105 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:57:22.698119 | orchestrator |
2026-04-11 02:57:22.698129 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-11 02:57:22.698138 | orchestrator | Saturday 11 April 2026 02:56:21 +0000 (0:00:00.650) 0:04:03.139 ********
2026-04-11 02:57:22.698146 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:57:22.698154 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:57:22.698197 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:57:22.698206 | orchestrator |
2026-04-11 02:57:22.698214 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-11 02:57:22.698223 | orchestrator | Saturday 11 April 2026 02:56:21 +0000 (0:00:00.412) 0:04:03.551 ********
2026-04-11 02:57:22.698231 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:57:22.698239 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:57:22.698256 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:57:22.698264 | orchestrator |
2026-04-11 02:57:22.698273 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-11 02:57:22.698281 | orchestrator | Saturday 11 April 2026 02:56:23 +0000 (0:00:01.494) 0:04:05.046 ********
2026-04-11 02:57:22.698289 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:57:22.698298 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:57:22.698306 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:57:22.698314 | orchestrator |
2026-04-11 02:57:22.698322 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-11 02:57:22.698330 | orchestrator | Saturday 11 April 2026 02:56:24 +0000 (0:00:01.291) 0:04:06.337 ********
2026-04-11 02:57:22.698339 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:22.698347 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:22.698355 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:22.698364 | orchestrator |
2026-04-11 02:57:22.698371 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-11 02:57:22.698379 | orchestrator | Saturday 11 April 2026 02:56:25 +0000 (0:00:00.634) 0:04:06.971 ********
2026-04-11 02:57:22.698386 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:57:22.698394 | orchestrator |
2026-04-11 02:57:22.698401 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-11 02:57:22.698408 | orchestrator | Saturday 11 April 2026 02:56:25 +0000 (0:00:00.583) 0:04:07.555 ********
2026-04-11 02:57:22.698416 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:22.698423 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:22.698430 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:22.698437 | orchestrator |
2026-04-11 02:57:22.698444 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-11 02:57:22.698468 | orchestrator | Saturday 11 April 2026 02:56:26 +0000 (0:00:00.323) 0:04:07.879 ********
2026-04-11 02:57:22.698475 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:22.698494 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:22.698502 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:22.698509 | orchestrator |
2026-04-11 02:57:22.698516 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-11 02:57:22.698524 | orchestrator | Saturday 11 April 2026 02:56:26 +0000 (0:00:00.633) 0:04:08.512 ********
2026-04-11 02:57:22.698531 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:57:22.698539 | orchestrator |
2026-04-11 02:57:22.698546 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-11 02:57:22.698553 | orchestrator | Saturday 11 April 2026 02:56:27 +0000 (0:00:00.660) 0:04:09.172 ********
2026-04-11 02:57:22.698560 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:57:22.698568 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:57:22.698575 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:57:22.698582 | orchestrator |
2026-04-11 02:57:22.698589 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-11 02:57:22.698596 | orchestrator | Saturday 11 April 2026 02:56:29 +0000 (0:00:01.947) 0:04:11.120 ********
2026-04-11 02:57:22.698604 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:57:22.698611 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:57:22.698618 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:57:22.698625 | orchestrator |
2026-04-11 02:57:22.698633 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-11 02:57:22.698640 | orchestrator | Saturday 11 April 2026 02:56:30 +0000 (0:00:01.553) 0:04:12.674 ********
2026-04-11 02:57:22.698647 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:57:22.698654 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:57:22.698662 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:57:22.698669 | orchestrator |
2026-04-11 02:57:22.698676 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-11 02:57:22.698683 | orchestrator | Saturday 11 April 2026 02:56:32 +0000 (0:00:01.798) 0:04:14.473 ********
2026-04-11 02:57:22.698690 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:57:22.698698 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:57:22.698705 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:57:22.698712 | orchestrator |
2026-04-11 02:57:22.698720 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-11 02:57:22.698727 | orchestrator | Saturday 11 April 2026 02:56:34 +0000 (0:00:02.167) 0:04:16.640 ********
2026-04-11 02:57:22.698734 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:57:22.698741 | orchestrator |
2026-04-11 02:57:22.698749 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-11 02:57:22.698761 | orchestrator | Saturday 11 April 2026 02:56:35 +0000 (0:00:00.911) 0:04:17.551 ********
2026-04-11 02:57:22.698784 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-04-11 02:57:22.698802 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:57:22.698815 | orchestrator | 2026-04-11 02:57:22.698827 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-11 02:57:22.698840 | orchestrator | Saturday 11 April 2026 02:56:57 +0000 (0:00:21.873) 0:04:39.425 ******** 2026-04-11 02:57:22.698850 | orchestrator | ok: [testbed-node-0] 2026-04-11 02:57:22.698863 | orchestrator | ok: [testbed-node-1] 2026-04-11 02:57:22.698876 | orchestrator | ok: [testbed-node-2] 2026-04-11 02:57:22.698888 | orchestrator | 2026-04-11 02:57:22.698901 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-11 02:57:22.698915 | orchestrator | Saturday 11 April 2026 02:57:07 +0000 (0:00:09.651) 0:04:49.076 ******** 2026-04-11 02:57:22.698929 | orchestrator | skipping: [testbed-node-0] 2026-04-11 02:57:22.698941 | orchestrator | skipping: [testbed-node-1] 2026-04-11 02:57:22.698954 | orchestrator | skipping: [testbed-node-2] 2026-04-11 02:57:22.698970 | orchestrator | 2026-04-11 02:57:22.698978 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-11 02:57:22.698985 | orchestrator | Saturday 11 April 2026 02:57:07 +0000 (0:00:00.375) 0:04:49.452 ******** 2026-04-11 02:57:22.698995 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__37b253559892ee540747c3c4731aca2733846549'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-11 02:57:22.699005 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__37b253559892ee540747c3c4731aca2733846549'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-11 02:57:22.699014 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__37b253559892ee540747c3c4731aca2733846549'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-11 02:57:22.699031 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__37b253559892ee540747c3c4731aca2733846549'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-11 02:57:37.607916 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__37b253559892ee540747c3c4731aca2733846549'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-11 02:57:37.608029 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__37b253559892ee540747c3c4731aca2733846549'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__37b253559892ee540747c3c4731aca2733846549'}])  2026-04-11 02:57:37.608049 | orchestrator | 2026-04-11 02:57:37.608063 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] **********************
2026-04-11 02:57:37.608076 | orchestrator | Saturday 11 April 2026 02:57:22 +0000 (0:00:15.024) 0:05:04.477 ********
2026-04-11 02:57:37.608088 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.608100 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:37.608112 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:37.608123 | orchestrator |
2026-04-11 02:57:37.608134 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-11 02:57:37.608145 | orchestrator | Saturday 11 April 2026 02:57:23 +0000 (0:00:00.397) 0:05:04.874 ********
2026-04-11 02:57:37.608157 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:57:37.608169 | orchestrator |
2026-04-11 02:57:37.608268 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-11 02:57:37.608286 | orchestrator | Saturday 11 April 2026 02:57:23 +0000 (0:00:00.854) 0:05:05.729 ********
2026-04-11 02:57:37.608304 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:57:37.608321 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:57:37.608337 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:57:37.608354 | orchestrator |
2026-04-11 02:57:37.608406 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-11 02:57:37.608444 | orchestrator | Saturday 11 April 2026 02:57:24 +0000 (0:00:00.369) 0:05:06.098 ********
2026-04-11 02:57:37.608464 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.608481 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:37.608500 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:37.608518 | orchestrator |
2026-04-11 02:57:37.608535 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-11
02:57:37.608554 | orchestrator | Saturday 11 April 2026 02:57:24 +0000 (0:00:00.393) 0:05:06.492 ********
2026-04-11 02:57:37.608572 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 02:57:37.608591 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 02:57:37.608608 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 02:57:37.608629 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.608648 | orchestrator |
2026-04-11 02:57:37.608667 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-11 02:57:37.608685 | orchestrator | Saturday 11 April 2026 02:57:25 +0000 (0:00:01.017) 0:05:07.509 ********
2026-04-11 02:57:37.608704 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:57:37.608722 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:57:37.608737 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:57:37.608754 | orchestrator |
2026-04-11 02:57:37.608806 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-04-11 02:57:37.608823 | orchestrator |
2026-04-11 02:57:37.608839 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-11 02:57:37.608856 | orchestrator | Saturday 11 April 2026 02:57:26 +0000 (0:00:01.022) 0:05:08.531 ********
2026-04-11 02:57:37.608876 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:57:37.608895 | orchestrator |
2026-04-11 02:57:37.608913 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-11 02:57:37.608930 | orchestrator | Saturday 11 April 2026 02:57:27 +0000 (0:00:00.586) 0:05:09.118 ********
2026-04-11 02:57:37.608947 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0,
testbed-node-1, testbed-node-2
2026-04-11 02:57:37.608964 | orchestrator |
2026-04-11 02:57:37.608982 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-11 02:57:37.609001 | orchestrator | Saturday 11 April 2026 02:57:28 +0000 (0:00:00.894) 0:05:10.012 ********
2026-04-11 02:57:37.609018 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:57:37.609034 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:57:37.609051 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:57:37.609068 | orchestrator |
2026-04-11 02:57:37.609085 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-11 02:57:37.609102 | orchestrator | Saturday 11 April 2026 02:57:28 +0000 (0:00:00.750) 0:05:10.762 ********
2026-04-11 02:57:37.609120 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.609136 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:37.609154 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:37.609171 | orchestrator |
2026-04-11 02:57:37.609222 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-11 02:57:37.609240 | orchestrator | Saturday 11 April 2026 02:57:29 +0000 (0:00:00.351) 0:05:11.114 ********
2026-04-11 02:57:37.609257 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.609275 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:37.609294 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:37.609312 | orchestrator |
2026-04-11 02:57:37.609361 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-11 02:57:37.609381 | orchestrator | Saturday 11 April 2026 02:57:29 +0000 (0:00:00.610) 0:05:11.725 ********
2026-04-11 02:57:37.609400 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.609419 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:37.609461 | orchestrator | skipping:
[testbed-node-2]
2026-04-11 02:57:37.609478 | orchestrator |
2026-04-11 02:57:37.609495 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-11 02:57:37.609513 | orchestrator | Saturday 11 April 2026 02:57:30 +0000 (0:00:00.360) 0:05:12.085 ********
2026-04-11 02:57:37.609530 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:57:37.609549 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:57:37.609567 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:57:37.609585 | orchestrator |
2026-04-11 02:57:37.609602 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-11 02:57:37.609621 | orchestrator | Saturday 11 April 2026 02:57:31 +0000 (0:00:00.764) 0:05:12.850 ********
2026-04-11 02:57:37.609638 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.609656 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:37.609674 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:37.609692 | orchestrator |
2026-04-11 02:57:37.609711 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-11 02:57:37.609728 | orchestrator | Saturday 11 April 2026 02:57:31 +0000 (0:00:00.363) 0:05:13.213 ********
2026-04-11 02:57:37.609746 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.609765 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:37.609783 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:37.609802 | orchestrator |
2026-04-11 02:57:37.609818 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-11 02:57:37.609837 | orchestrator | Saturday 11 April 2026 02:57:32 +0000 (0:00:00.654) 0:05:13.868 ********
2026-04-11 02:57:37.609853 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:57:37.609870 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:57:37.609888 | orchestrator | ok: [testbed-node-2] 2026-04-11
02:57:37.609906 | orchestrator |
2026-04-11 02:57:37.609923 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-11 02:57:37.609942 | orchestrator | Saturday 11 April 2026 02:57:32 +0000 (0:00:00.802) 0:05:14.670 ********
2026-04-11 02:57:37.609962 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:57:37.609981 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:57:37.609999 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:57:37.610111 | orchestrator |
2026-04-11 02:57:37.610142 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-11 02:57:37.610161 | orchestrator | Saturday 11 April 2026 02:57:33 +0000 (0:00:00.837) 0:05:15.508 ********
2026-04-11 02:57:37.610226 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.610265 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:37.610286 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:37.610298 | orchestrator |
2026-04-11 02:57:37.610309 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 02:57:37.610320 | orchestrator | Saturday 11 April 2026 02:57:34 +0000 (0:00:00.341) 0:05:15.849 ********
2026-04-11 02:57:37.610331 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:57:37.610342 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:57:37.610353 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:57:37.610363 | orchestrator |
2026-04-11 02:57:37.610374 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 02:57:37.610385 | orchestrator | Saturday 11 April 2026 02:57:34 +0000 (0:00:00.692) 0:05:16.542 ********
2026-04-11 02:57:37.610396 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.610407 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:37.610418 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:37.610429 | orchestrator |
2026-04-11 02:57:37.610440 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 02:57:37.610451 | orchestrator | Saturday 11 April 2026 02:57:35 +0000 (0:00:00.361) 0:05:16.903 ********
2026-04-11 02:57:37.610461 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.610472 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:37.610483 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:37.610508 | orchestrator |
2026-04-11 02:57:37.610519 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 02:57:37.610530 | orchestrator | Saturday 11 April 2026 02:57:35 +0000 (0:00:00.351) 0:05:17.255 ********
2026-04-11 02:57:37.610541 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.610552 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:37.610563 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:37.610573 | orchestrator |
2026-04-11 02:57:37.610584 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 02:57:37.610595 | orchestrator | Saturday 11 April 2026 02:57:35 +0000 (0:00:00.349) 0:05:17.605 ********
2026-04-11 02:57:37.610606 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.610616 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:37.610627 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:37.610644 | orchestrator |
2026-04-11 02:57:37.610663 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 02:57:37.610681 | orchestrator | Saturday 11 April 2026 02:57:36 +0000 (0:00:00.662) 0:05:18.268 ********
2026-04-11 02:57:37.610708 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:57:37.610727 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:57:37.610744 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:57:37.610760 | orchestrator |
2026-04-11 02:57:37.610778 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 02:57:37.610795 | orchestrator | Saturday 11 April 2026 02:57:36 +0000 (0:00:00.389) 0:05:18.658 ********
2026-04-11 02:57:37.610812 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:57:37.610828 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:57:37.610846 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:57:37.610863 | orchestrator |
2026-04-11 02:57:37.610880 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 02:57:37.610899 | orchestrator | Saturday 11 April 2026 02:57:37 +0000 (0:00:00.367) 0:05:19.025 ********
2026-04-11 02:57:37.610915 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:57:37.610933 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:57:37.610950 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:57:37.610967 | orchestrator |
2026-04-11 02:57:37.610986 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 02:57:37.611029 | orchestrator | Saturday 11 April 2026 02:57:37 +0000 (0:00:00.359) 0:05:19.384 ********
2026-04-11 02:58:46.441550 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:58:46.441684 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:58:46.441711 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:58:46.441730 | orchestrator |
2026-04-11 02:58:46.441749 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-11 02:58:46.441768 | orchestrator | Saturday 11 April 2026 02:57:38 +0000 (0:00:00.997) 0:05:20.382 ********
2026-04-11 02:58:46.441785 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 02:58:46.441801 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 02:58:46.441845 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] =>
(item=testbed-node-2)
2026-04-11 02:58:46.441860 | orchestrator |
2026-04-11 02:58:46.441876 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-11 02:58:46.441894 | orchestrator | Saturday 11 April 2026 02:57:39 +0000 (0:00:00.716) 0:05:21.098 ********
2026-04-11 02:58:46.441910 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:58:46.441927 | orchestrator |
2026-04-11 02:58:46.441944 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-11 02:58:46.441961 | orchestrator | Saturday 11 April 2026 02:57:40 +0000 (0:00:00.851) 0:05:21.950 ********
2026-04-11 02:58:46.441977 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:58:46.441995 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:58:46.442006 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:58:46.442078 | orchestrator |
2026-04-11 02:58:46.442122 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-11 02:58:46.442134 | orchestrator | Saturday 11 April 2026 02:57:40 +0000 (0:00:00.769) 0:05:22.720 ********
2026-04-11 02:58:46.442145 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:58:46.442156 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:58:46.442167 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:58:46.442177 | orchestrator |
2026-04-11 02:58:46.442186 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-11 02:58:46.442197 | orchestrator | Saturday 11 April 2026 02:57:41 +0000 (0:00:00.379) 0:05:23.100 ********
2026-04-11 02:58:46.442207 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-11 02:58:46.442217 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-11 02:58:46.442226 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-11 02:58:46.442279 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-04-11 02:58:46.442290 | orchestrator |
2026-04-11 02:58:46.442313 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-11 02:58:46.442323 | orchestrator | Saturday 11 April 2026 02:57:52 +0000 (0:00:10.817) 0:05:33.918 ********
2026-04-11 02:58:46.442332 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:58:46.442342 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:58:46.442351 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:58:46.442361 | orchestrator |
2026-04-11 02:58:46.442370 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-11 02:58:46.442379 | orchestrator | Saturday 11 April 2026 02:57:52 +0000 (0:00:00.428) 0:05:34.346 ********
2026-04-11 02:58:46.442389 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-11 02:58:46.442398 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-11 02:58:46.442408 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-11 02:58:46.442418 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-11 02:58:46.442427 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 02:58:46.442436 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 02:58:46.442446 | orchestrator |
2026-04-11 02:58:46.442455 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-11 02:58:46.442465 | orchestrator | Saturday 11 April 2026 02:57:55 +0000 (0:00:02.609) 0:05:36.955 ********
2026-04-11 02:58:46.442474 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-11 02:58:46.442483 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-11 02:58:46.442493 | orchestrator | skipping: [testbed-node-2] => (item=None) 2026-04-11
02:58:46.442502 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-11 02:58:46.442511 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-11 02:58:46.442521 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-11 02:58:46.442530 | orchestrator |
2026-04-11 02:58:46.442540 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-11 02:58:46.442549 | orchestrator | Saturday 11 April 2026 02:57:56 +0000 (0:00:01.300) 0:05:38.256 ********
2026-04-11 02:58:46.442558 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:58:46.442568 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:58:46.442577 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:58:46.442587 | orchestrator |
2026-04-11 02:58:46.442596 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-11 02:58:46.442606 | orchestrator | Saturday 11 April 2026 02:57:57 +0000 (0:00:00.713) 0:05:38.970 ********
2026-04-11 02:58:46.442615 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:58:46.442624 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:58:46.442634 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:58:46.442643 | orchestrator |
2026-04-11 02:58:46.442653 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-11 02:58:46.442662 | orchestrator | Saturday 11 April 2026 02:57:57 +0000 (0:00:00.354) 0:05:39.325 ********
2026-04-11 02:58:46.442679 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:58:46.442689 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:58:46.442699 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:58:46.442708 | orchestrator |
2026-04-11 02:58:46.442717 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-11 02:58:46.442727 | orchestrator | Saturday 11 April 2026 02:57:58 +0000 (0:00:00.642) 0:05:39.967
********
2026-04-11 02:58:46.442737 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:58:46.442747 | orchestrator |
2026-04-11 02:58:46.442780 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-11 02:58:46.442791 | orchestrator | Saturday 11 April 2026 02:57:58 +0000 (0:00:00.588) 0:05:40.556 ********
2026-04-11 02:58:46.442800 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:58:46.442810 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:58:46.442819 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:58:46.442829 | orchestrator |
2026-04-11 02:58:46.442838 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-11 02:58:46.442848 | orchestrator | Saturday 11 April 2026 02:57:59 +0000 (0:00:00.383) 0:05:40.939 ********
2026-04-11 02:58:46.442857 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:58:46.442872 | orchestrator | skipping: [testbed-node-1]
2026-04-11 02:58:46.442888 | orchestrator | skipping: [testbed-node-2]
2026-04-11 02:58:46.442903 | orchestrator |
2026-04-11 02:58:46.442919 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-11 02:58:46.442934 | orchestrator | Saturday 11 April 2026 02:57:59 +0000 (0:00:00.678) 0:05:41.617 ********
2026-04-11 02:58:46.442950 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:58:46.442966 | orchestrator |
2026-04-11 02:58:46.442983 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-11 02:58:46.443000 | orchestrator | Saturday 11 April 2026 02:58:00 +0000 (0:00:00.633) 0:05:42.250 ********
2026-04-11 02:58:46.443018 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:58:46.443032 | orchestrator | changed:
[testbed-node-1]
2026-04-11 02:58:46.443049 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:58:46.443066 | orchestrator |
2026-04-11 02:58:46.443082 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-11 02:58:46.443100 | orchestrator | Saturday 11 April 2026 02:58:01 +0000 (0:00:01.312) 0:05:43.563 ********
2026-04-11 02:58:46.443111 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:58:46.443120 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:58:46.443130 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:58:46.443145 | orchestrator |
2026-04-11 02:58:46.443165 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-11 02:58:46.443189 | orchestrator | Saturday 11 April 2026 02:58:03 +0000 (0:00:01.570) 0:05:45.133 ********
2026-04-11 02:58:46.443205 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:58:46.443219 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:58:46.443299 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:58:46.443319 | orchestrator |
2026-04-11 02:58:46.443334 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-11 02:58:46.443359 | orchestrator | Saturday 11 April 2026 02:58:05 +0000 (0:00:01.819) 0:05:46.953 ********
2026-04-11 02:58:46.443370 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:58:46.443379 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:58:46.443389 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:58:46.443398 | orchestrator |
2026-04-11 02:58:46.443408 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-11 02:58:46.443417 | orchestrator | Saturday 11 April 2026 02:58:07 +0000 (0:00:02.031) 0:05:48.984 ********
2026-04-11 02:58:46.443427 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:58:46.443436 | orchestrator | skipping:
[testbed-node-1]
2026-04-11 02:58:46.443446 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-11 02:58:46.443466 | orchestrator |
2026-04-11 02:58:46.443476 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-11 02:58:46.443485 | orchestrator | Saturday 11 April 2026 02:58:07 +0000 (0:00:00.744) 0:05:49.728 ********
2026-04-11 02:58:46.443495 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-11 02:58:46.443504 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-11 02:58:46.443514 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-04-11 02:58:46.443524 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-04-11 02:58:46.443534 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-04-11 02:58:46.443549 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-11 02:58:46.443566 | orchestrator |
2026-04-11 02:58:46.443587 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-11 02:58:46.443610 | orchestrator | Saturday 11 April 2026 02:58:38 +0000 (0:00:30.175) 0:06:19.904 ********
2026-04-11 02:58:46.443626 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-11 02:58:46.443642 | orchestrator |
2026-04-11 02:58:46.443657 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-11 02:58:46.443672 | orchestrator | Saturday 11 April 2026 02:58:39 +0000 (0:00:01.297) 0:06:21.202 ********
2026-04-11 02:58:46.443687 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:58:46.443702 | orchestrator |
2026-04-11 02:58:46.443719 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-11 02:58:46.443736 | orchestrator | Saturday 11 April 2026 02:58:39 +0000 (0:00:00.319) 0:06:21.522 ********
2026-04-11 02:58:46.443753 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:58:46.443769 | orchestrator |
2026-04-11 02:58:46.443786 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-11 02:58:46.443802 | orchestrator | Saturday 11 April 2026 02:58:39 +0000 (0:00:00.162) 0:06:21.684 ********
2026-04-11 02:58:46.443818 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-11 02:58:46.443835 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-11 02:58:46.443852 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-11 02:58:46.443868 | orchestrator |
2026-04-11 02:58:46.443899 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr]
**************************************
2026-04-11 02:59:09.387181 | orchestrator | Saturday 11 April 2026 02:58:46 +0000 (0:00:06.537) 0:06:28.222 ********
2026-04-11 02:59:09.387357 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-11 02:59:09.387378 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-11 02:59:09.387391 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-11 02:59:09.387402 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-11 02:59:09.387413 | orchestrator |
2026-04-11 02:59:09.387425 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-11 02:59:09.387436 | orchestrator | Saturday 11 April 2026 02:58:51 +0000 (0:00:05.105) 0:06:33.328 ********
2026-04-11 02:59:09.387447 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:59:09.387458 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:59:09.387469 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:59:09.387480 | orchestrator |
2026-04-11 02:59:09.387491 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-11 02:59:09.387502 | orchestrator | Saturday 11 April 2026 02:58:52 +0000 (0:00:00.691) 0:06:34.019 ********
2026-04-11 02:59:09.387513 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 02:59:09.387546 | orchestrator |
2026-04-11 02:59:09.387557 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-11 02:59:09.387568 | orchestrator | Saturday 11 April 2026 02:58:52 +0000 (0:00:00.612) 0:06:34.632 ********
2026-04-11 02:59:09.387579 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:59:09.387590 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:59:09.387600 | orchestrator | ok:
[testbed-node-2]
2026-04-11 02:59:09.387611 | orchestrator |
2026-04-11 02:59:09.387622 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-11 02:59:09.387633 | orchestrator | Saturday 11 April 2026 02:58:53 +0000 (0:00:00.657) 0:06:35.289 ********
2026-04-11 02:59:09.387643 | orchestrator | changed: [testbed-node-0]
2026-04-11 02:59:09.387654 | orchestrator | changed: [testbed-node-1]
2026-04-11 02:59:09.387665 | orchestrator | changed: [testbed-node-2]
2026-04-11 02:59:09.387675 | orchestrator |
2026-04-11 02:59:09.387686 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-11 02:59:09.387697 | orchestrator | Saturday 11 April 2026 02:58:54 +0000 (0:00:01.195) 0:06:36.485 ********
2026-04-11 02:59:09.387730 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 02:59:09.387741 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 02:59:09.387752 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 02:59:09.387763 | orchestrator | skipping: [testbed-node-0]
2026-04-11 02:59:09.387789 | orchestrator |
2026-04-11 02:59:09.387801 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-11 02:59:09.387811 | orchestrator | Saturday 11 April 2026 02:58:55 +0000 (0:00:00.769) 0:06:37.254 ********
2026-04-11 02:59:09.387822 | orchestrator | ok: [testbed-node-0]
2026-04-11 02:59:09.387842 | orchestrator | ok: [testbed-node-1]
2026-04-11 02:59:09.387854 | orchestrator | ok: [testbed-node-2]
2026-04-11 02:59:09.387864 | orchestrator |
2026-04-11 02:59:09.387876 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-11 02:59:09.387887 | orchestrator |
2026-04-11 02:59:09.387898 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11
02:59:09.387909 | orchestrator | Saturday 11 April 2026 02:58:56 +0000 (0:00:00.630) 0:06:37.885 ********
2026-04-11 02:59:09.387921 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 02:59:09.387933 | orchestrator |
2026-04-11 02:59:09.387944 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-11 02:59:09.387955 | orchestrator | Saturday 11 April 2026 02:58:56 +0000 (0:00:00.891) 0:06:38.777 ********
2026-04-11 02:59:09.387966 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 02:59:09.387977 | orchestrator |
2026-04-11 02:59:09.387987 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-11 02:59:09.387998 | orchestrator | Saturday 11 April 2026 02:58:57 +0000 (0:00:00.869) 0:06:39.647 ********
2026-04-11 02:59:09.388009 | orchestrator | skipping: [testbed-node-3]
2026-04-11 02:59:09.388020 | orchestrator | skipping: [testbed-node-4]
2026-04-11 02:59:09.388031 | orchestrator | skipping: [testbed-node-5]
2026-04-11 02:59:09.388042 | orchestrator |
2026-04-11 02:59:09.388052 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-11 02:59:09.388063 | orchestrator | Saturday 11 April 2026 02:58:58 +0000 (0:00:00.373) 0:06:40.020 ********
2026-04-11 02:59:09.388074 | orchestrator | ok: [testbed-node-3]
2026-04-11 02:59:09.388085 | orchestrator | ok: [testbed-node-4]
2026-04-11 02:59:09.388095 | orchestrator | ok: [testbed-node-5]
2026-04-11 02:59:09.388106 | orchestrator |
2026-04-11 02:59:09.388117 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-11 02:59:09.388128 | orchestrator | Saturday 11 April 2026 02:58:58 +0000 (0:00:00.716) 0:06:40.770 ********
2026-04-11 02:59:09.388146 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:59:09.388157 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:59:09.388167 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:59:09.388178 | orchestrator | 2026-04-11 02:59:09.388189 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 02:59:09.388200 | orchestrator | Saturday 11 April 2026 02:58:59 +0000 (0:00:00.716) 0:06:41.487 ******** 2026-04-11 02:59:09.388211 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:59:09.388222 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:59:09.388232 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:59:09.388243 | orchestrator | 2026-04-11 02:59:09.388271 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 02:59:09.388283 | orchestrator | Saturday 11 April 2026 02:59:00 +0000 (0:00:01.024) 0:06:42.512 ******** 2026-04-11 02:59:09.388294 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:59:09.388305 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:59:09.388334 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:59:09.388346 | orchestrator | 2026-04-11 02:59:09.388357 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 02:59:09.388368 | orchestrator | Saturday 11 April 2026 02:59:01 +0000 (0:00:00.379) 0:06:42.891 ******** 2026-04-11 02:59:09.388379 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:59:09.388390 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:59:09.388400 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:59:09.388411 | orchestrator | 2026-04-11 02:59:09.388422 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 02:59:09.388433 | orchestrator | Saturday 11 April 2026 02:59:01 +0000 (0:00:00.371) 0:06:43.263 ******** 2026-04-11 02:59:09.388443 | 
orchestrator | skipping: [testbed-node-3] 2026-04-11 02:59:09.388454 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:59:09.388465 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:59:09.388476 | orchestrator | 2026-04-11 02:59:09.388487 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 02:59:09.388498 | orchestrator | Saturday 11 April 2026 02:59:01 +0000 (0:00:00.411) 0:06:43.674 ******** 2026-04-11 02:59:09.388509 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:59:09.388520 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:59:09.388531 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:59:09.388541 | orchestrator | 2026-04-11 02:59:09.388552 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 02:59:09.388563 | orchestrator | Saturday 11 April 2026 02:59:03 +0000 (0:00:01.406) 0:06:45.081 ******** 2026-04-11 02:59:09.388574 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:59:09.388585 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:59:09.388595 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:59:09.388606 | orchestrator | 2026-04-11 02:59:09.388617 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 02:59:09.388628 | orchestrator | Saturday 11 April 2026 02:59:04 +0000 (0:00:00.746) 0:06:45.827 ******** 2026-04-11 02:59:09.388639 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:59:09.388650 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:59:09.388661 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:59:09.388672 | orchestrator | 2026-04-11 02:59:09.388683 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 02:59:09.388693 | orchestrator | Saturday 11 April 2026 02:59:04 +0000 (0:00:00.365) 0:06:46.193 ******** 2026-04-11 02:59:09.388704 | orchestrator | skipping: 
[testbed-node-3] 2026-04-11 02:59:09.388715 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:59:09.388725 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:59:09.388736 | orchestrator | 2026-04-11 02:59:09.388752 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 02:59:09.388764 | orchestrator | Saturday 11 April 2026 02:59:04 +0000 (0:00:00.358) 0:06:46.551 ******** 2026-04-11 02:59:09.388775 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:59:09.388792 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:59:09.388803 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:59:09.388814 | orchestrator | 2026-04-11 02:59:09.388825 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 02:59:09.388836 | orchestrator | Saturday 11 April 2026 02:59:05 +0000 (0:00:00.684) 0:06:47.236 ******** 2026-04-11 02:59:09.388846 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:59:09.388857 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:59:09.388868 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:59:09.388879 | orchestrator | 2026-04-11 02:59:09.388890 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 02:59:09.388901 | orchestrator | Saturday 11 April 2026 02:59:05 +0000 (0:00:00.376) 0:06:47.613 ******** 2026-04-11 02:59:09.388912 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:59:09.388923 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:59:09.388933 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:59:09.388946 | orchestrator | 2026-04-11 02:59:09.388965 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 02:59:09.388986 | orchestrator | Saturday 11 April 2026 02:59:06 +0000 (0:00:00.401) 0:06:48.014 ******** 2026-04-11 02:59:09.389004 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:59:09.389022 | 
orchestrator | skipping: [testbed-node-4] 2026-04-11 02:59:09.389041 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:59:09.389060 | orchestrator | 2026-04-11 02:59:09.389077 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 02:59:09.389094 | orchestrator | Saturday 11 April 2026 02:59:06 +0000 (0:00:00.359) 0:06:48.373 ******** 2026-04-11 02:59:09.389115 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:59:09.389134 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:59:09.389153 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:59:09.389171 | orchestrator | 2026-04-11 02:59:09.389191 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 02:59:09.389212 | orchestrator | Saturday 11 April 2026 02:59:07 +0000 (0:00:00.688) 0:06:49.062 ******** 2026-04-11 02:59:09.389225 | orchestrator | skipping: [testbed-node-3] 2026-04-11 02:59:09.389236 | orchestrator | skipping: [testbed-node-4] 2026-04-11 02:59:09.389247 | orchestrator | skipping: [testbed-node-5] 2026-04-11 02:59:09.389287 | orchestrator | 2026-04-11 02:59:09.389299 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 02:59:09.389310 | orchestrator | Saturday 11 April 2026 02:59:07 +0000 (0:00:00.379) 0:06:49.442 ******** 2026-04-11 02:59:09.389321 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:59:09.389332 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:59:09.389343 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:59:09.389353 | orchestrator | 2026-04-11 02:59:09.389364 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 02:59:09.389375 | orchestrator | Saturday 11 April 2026 02:59:08 +0000 (0:00:00.380) 0:06:49.822 ******** 2026-04-11 02:59:09.389386 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:59:09.389397 | orchestrator | ok: 
[testbed-node-4] 2026-04-11 02:59:09.389408 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:59:09.389418 | orchestrator | 2026-04-11 02:59:09.389430 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-11 02:59:09.389440 | orchestrator | Saturday 11 April 2026 02:59:08 +0000 (0:00:00.914) 0:06:50.737 ******** 2026-04-11 02:59:09.389451 | orchestrator | ok: [testbed-node-3] 2026-04-11 02:59:09.389462 | orchestrator | ok: [testbed-node-4] 2026-04-11 02:59:09.389473 | orchestrator | ok: [testbed-node-5] 2026-04-11 02:59:09.389483 | orchestrator | 2026-04-11 02:59:09.389504 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-11 03:00:10.279612 | orchestrator | Saturday 11 April 2026 02:59:09 +0000 (0:00:00.430) 0:06:51.167 ******** 2026-04-11 03:00:10.279722 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 03:00:10.279735 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 03:00:10.279763 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 03:00:10.279770 | orchestrator | 2026-04-11 03:00:10.279778 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-11 03:00:10.279785 | orchestrator | Saturday 11 April 2026 02:59:10 +0000 (0:00:00.712) 0:06:51.879 ******** 2026-04-11 03:00:10.279793 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:00:10.279805 | orchestrator | 2026-04-11 03:00:10.279816 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-11 03:00:10.279827 | orchestrator | Saturday 11 April 2026 02:59:10 +0000 (0:00:00.865) 0:06:52.744 ******** 2026-04-11 03:00:10.279838 | orchestrator | skipping: 
[testbed-node-3] 2026-04-11 03:00:10.279848 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:00:10.279858 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:00:10.279868 | orchestrator | 2026-04-11 03:00:10.279879 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-11 03:00:10.279889 | orchestrator | Saturday 11 April 2026 02:59:11 +0000 (0:00:00.360) 0:06:53.105 ******** 2026-04-11 03:00:10.279900 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:00:10.279910 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:00:10.279920 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:00:10.279930 | orchestrator | 2026-04-11 03:00:10.279940 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-11 03:00:10.279951 | orchestrator | Saturday 11 April 2026 02:59:11 +0000 (0:00:00.359) 0:06:53.464 ******** 2026-04-11 03:00:10.279963 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:00:10.279975 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:00:10.279986 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:00:10.279997 | orchestrator | 2026-04-11 03:00:10.280009 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-11 03:00:10.280021 | orchestrator | Saturday 11 April 2026 02:59:12 +0000 (0:00:00.675) 0:06:54.139 ******** 2026-04-11 03:00:10.280049 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:00:10.280062 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:00:10.280073 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:00:10.280085 | orchestrator | 2026-04-11 03:00:10.280096 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-11 03:00:10.280108 | orchestrator | Saturday 11 April 2026 02:59:13 +0000 (0:00:00.686) 0:06:54.826 ******** 2026-04-11 03:00:10.280116 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-11 03:00:10.280125 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-11 03:00:10.280133 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-11 03:00:10.280141 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-11 03:00:10.280149 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-11 03:00:10.280157 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-11 03:00:10.280165 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-11 03:00:10.280172 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-11 03:00:10.280180 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-11 03:00:10.280189 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-11 03:00:10.280197 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-11 03:00:10.280204 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-11 03:00:10.280212 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-11 03:00:10.280228 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-11 03:00:10.280235 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-11 03:00:10.280243 | orchestrator | 2026-04-11 03:00:10.280250 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-04-11 03:00:10.280258 | orchestrator | Saturday 11 April 2026 02:59:15 +0000 (0:00:02.961) 0:06:57.788 ******** 2026-04-11 03:00:10.280269 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:00:10.280280 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:00:10.280291 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:00:10.280355 | orchestrator | 2026-04-11 03:00:10.280368 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-11 03:00:10.280379 | orchestrator | Saturday 11 April 2026 02:59:16 +0000 (0:00:00.351) 0:06:58.139 ******** 2026-04-11 03:00:10.280390 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:00:10.280400 | orchestrator | 2026-04-11 03:00:10.280410 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-11 03:00:10.280421 | orchestrator | Saturday 11 April 2026 02:59:17 +0000 (0:00:00.830) 0:06:58.970 ******** 2026-04-11 03:00:10.280431 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-11 03:00:10.280461 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-11 03:00:10.280474 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-11 03:00:10.280486 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-11 03:00:10.280498 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-11 03:00:10.280510 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-11 03:00:10.280522 | orchestrator | 2026-04-11 03:00:10.280533 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-11 03:00:10.280545 | orchestrator | Saturday 11 April 2026 02:59:18 +0000 (0:00:01.006) 0:06:59.976 ******** 2026-04-11 03:00:10.280557 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 03:00:10.280568 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-11 03:00:10.280580 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 03:00:10.280587 | orchestrator | 2026-04-11 03:00:10.280593 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-11 03:00:10.280600 | orchestrator | Saturday 11 April 2026 02:59:20 +0000 (0:00:02.114) 0:07:02.091 ******** 2026-04-11 03:00:10.280607 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-11 03:00:10.280618 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-11 03:00:10.280628 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:00:10.280638 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-11 03:00:10.280648 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-11 03:00:10.280659 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:00:10.280671 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-11 03:00:10.280681 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-11 03:00:10.280692 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:00:10.280698 | orchestrator | 2026-04-11 03:00:10.280709 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-11 03:00:10.280720 | orchestrator | Saturday 11 April 2026 02:59:21 +0000 (0:00:01.163) 0:07:03.254 ******** 2026-04-11 03:00:10.280731 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-11 03:00:10.280743 | orchestrator | 2026-04-11 03:00:10.280754 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-11 03:00:10.280773 | orchestrator | Saturday 11 April 2026 02:59:23 +0000 (0:00:02.143) 0:07:05.398 ******** 2026-04-11 03:00:10.280783 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:00:10.280797 | orchestrator | 2026-04-11 03:00:10.280804 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-11 03:00:10.280810 | orchestrator | Saturday 11 April 2026 02:59:24 +0000 (0:00:00.931) 0:07:06.329 ******** 2026-04-11 03:00:10.280818 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'}) 2026-04-11 03:00:10.280826 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'}) 2026-04-11 03:00:10.280833 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'}) 2026-04-11 03:00:10.280840 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'}) 2026-04-11 03:00:10.280847 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'}) 2026-04-11 03:00:10.280854 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'}) 2026-04-11 03:00:10.280861 | orchestrator | 2026-04-11 03:00:10.280867 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-11 03:00:10.280874 | orchestrator | Saturday 11 April 2026 03:00:05 +0000 (0:00:40.886) 0:07:47.216 ******** 2026-04-11 03:00:10.280881 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:00:10.280888 | orchestrator | skipping: [testbed-node-4] 2026-04-11 
03:00:10.280894 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:00:10.280901 | orchestrator | 2026-04-11 03:00:10.280908 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-11 03:00:10.280915 | orchestrator | Saturday 11 April 2026 03:00:05 +0000 (0:00:00.376) 0:07:47.592 ******** 2026-04-11 03:00:10.280921 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:00:10.280928 | orchestrator | 2026-04-11 03:00:10.280935 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-11 03:00:10.280942 | orchestrator | Saturday 11 April 2026 03:00:06 +0000 (0:00:00.921) 0:07:48.514 ******** 2026-04-11 03:00:10.280948 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:00:10.280955 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:00:10.280962 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:00:10.280969 | orchestrator | 2026-04-11 03:00:10.280975 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-11 03:00:10.280982 | orchestrator | Saturday 11 April 2026 03:00:07 +0000 (0:00:00.743) 0:07:49.257 ******** 2026-04-11 03:00:10.280989 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:00:10.280996 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:00:10.281002 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:00:10.281009 | orchestrator | 2026-04-11 03:00:10.281016 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-11 03:00:10.281029 | orchestrator | Saturday 11 April 2026 03:00:10 +0000 (0:00:02.800) 0:07:52.057 ******** 2026-04-11 03:00:49.166961 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:00:49.167055 | orchestrator | 2026-04-11 03:00:49.167068 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-04-11 03:00:49.167078 | orchestrator | Saturday 11 April 2026 03:00:11 +0000 (0:00:00.942) 0:07:53.000 ******** 2026-04-11 03:00:49.167086 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:00:49.167094 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:00:49.167102 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:00:49.167110 | orchestrator | 2026-04-11 03:00:49.167138 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-11 03:00:49.167146 | orchestrator | Saturday 11 April 2026 03:00:12 +0000 (0:00:01.272) 0:07:54.273 ******** 2026-04-11 03:00:49.167154 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:00:49.167161 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:00:49.167169 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:00:49.167176 | orchestrator | 2026-04-11 03:00:49.167183 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-11 03:00:49.167191 | orchestrator | Saturday 11 April 2026 03:00:13 +0000 (0:00:01.206) 0:07:55.479 ******** 2026-04-11 03:00:49.167198 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:00:49.167206 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:00:49.167213 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:00:49.167220 | orchestrator | 2026-04-11 03:00:49.167228 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-11 03:00:49.167235 | orchestrator | Saturday 11 April 2026 03:00:15 +0000 (0:00:02.263) 0:07:57.743 ******** 2026-04-11 03:00:49.167243 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:00:49.167250 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:00:49.167257 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:00:49.167265 | orchestrator | 2026-04-11 03:00:49.167272 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-04-11 03:00:49.167280 | orchestrator | Saturday 11 April 2026 03:00:16 +0000 (0:00:00.386) 0:07:58.129 ******** 2026-04-11 03:00:49.167287 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:00:49.167294 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:00:49.167302 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:00:49.167309 | orchestrator | 2026-04-11 03:00:49.167316 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-11 03:00:49.167376 | orchestrator | Saturday 11 April 2026 03:00:16 +0000 (0:00:00.406) 0:07:58.536 ******** 2026-04-11 03:00:49.167386 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-04-11 03:00:49.167394 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-04-11 03:00:49.167401 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-04-11 03:00:49.167408 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-11 03:00:49.167416 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-04-11 03:00:49.167423 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-04-11 03:00:49.167430 | orchestrator | 2026-04-11 03:00:49.167437 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-11 03:00:49.167444 | orchestrator | Saturday 11 April 2026 03:00:17 +0000 (0:00:01.028) 0:07:59.564 ******** 2026-04-11 03:00:49.167452 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-11 03:00:49.167460 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-04-11 03:00:49.167467 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-04-11 03:00:49.167475 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-11 03:00:49.167482 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-04-11 03:00:49.167489 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-04-11 03:00:49.167498 | orchestrator | 2026-04-11 03:00:49.167506 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-04-11 03:00:49.167515 | orchestrator | Saturday 11 April 2026 03:00:20 +0000 (0:00:02.605) 0:08:02.170 ******** 2026-04-11 03:00:49.167523 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-11 03:00:49.167532 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-04-11 03:00:49.167540 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-04-11 03:00:49.167548 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-11 03:00:49.167556 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-04-11 03:00:49.167565 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-04-11 03:00:49.167573 | orchestrator | 2026-04-11 03:00:49.167581 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-11 03:00:49.167590 | orchestrator | Saturday 11 April 2026 03:00:24 +0000 (0:00:03.812) 0:08:05.982 ******** 2026-04-11 03:00:49.167606 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:00:49.167614 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:00:49.167621 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-11 03:00:49.167629 | orchestrator | 2026-04-11 03:00:49.167636 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-11 03:00:49.167643 | orchestrator | Saturday 11 April 2026 03:00:26 +0000 (0:00:02.697) 0:08:08.680 ******** 2026-04-11 03:00:49.167651 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:00:49.167658 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:00:49.167665 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-04-11 03:00:49.167673 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-11 03:00:49.167680 | orchestrator | 2026-04-11 03:00:49.167687 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-11 03:00:49.167695 | orchestrator | Saturday 11 April 2026 03:00:39 +0000 (0:00:12.614) 0:08:21.295 ******** 2026-04-11 03:00:49.167702 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:00:49.167709 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:00:49.167716 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:00:49.167724 | orchestrator | 2026-04-11 03:00:49.167731 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-11 03:00:49.167743 | orchestrator | Saturday 11 April 2026 03:00:40 +0000 (0:00:01.347) 0:08:22.643 ******** 2026-04-11 03:00:49.167756 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:00:49.167768 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:00:49.167797 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:00:49.167811 | orchestrator | 2026-04-11 03:00:49.167825 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-11 03:00:49.167833 | orchestrator | Saturday 11 April 2026 03:00:41 +0000 (0:00:00.361) 0:08:23.004 ******** 2026-04-11 03:00:49.167841 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:00:49.167848 | orchestrator | 2026-04-11 03:00:49.167856 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-11 03:00:49.167863 | orchestrator | Saturday 11 April 2026 03:00:42 +0000 (0:00:00.931) 0:08:23.936 ******** 2026-04-11 03:00:49.167881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 03:00:49.167889 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)
2026-04-11 03:00:49.167896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 03:00:49.167904 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:00:49.167911 | orchestrator |
2026-04-11 03:00:49.167918 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-11 03:00:49.167925 | orchestrator | Saturday 11 April 2026 03:00:42 +0000 (0:00:00.455) 0:08:24.391 ********
2026-04-11 03:00:49.167943 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:00:49.167950 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:00:49.167957 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:00:49.167965 | orchestrator |
2026-04-11 03:00:49.167972 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-11 03:00:49.167979 | orchestrator | Saturday 11 April 2026 03:00:42 +0000 (0:00:00.374) 0:08:24.766 ********
2026-04-11 03:00:49.167986 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:00:49.167994 | orchestrator |
2026-04-11 03:00:49.168001 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-11 03:00:49.168008 | orchestrator | Saturday 11 April 2026 03:00:43 +0000 (0:00:00.250) 0:08:25.017 ********
2026-04-11 03:00:49.168016 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:00:49.168023 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:00:49.168030 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:00:49.168037 | orchestrator |
2026-04-11 03:00:49.168045 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-11 03:00:49.168063 | orchestrator | Saturday 11 April 2026 03:00:43 +0000 (0:00:00.673) 0:08:25.690 ********
2026-04-11 03:00:49.168071 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:00:49.168078 | orchestrator |
2026-04-11 03:00:49.168085 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-11 03:00:49.168092 | orchestrator | Saturday 11 April 2026 03:00:44 +0000 (0:00:00.314) 0:08:26.004 ********
2026-04-11 03:00:49.168100 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:00:49.168107 | orchestrator |
2026-04-11 03:00:49.168114 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-11 03:00:49.168121 | orchestrator | Saturday 11 April 2026 03:00:44 +0000 (0:00:00.262) 0:08:26.267 ********
2026-04-11 03:00:49.168129 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:00:49.168136 | orchestrator |
2026-04-11 03:00:49.168143 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-11 03:00:49.168151 | orchestrator | Saturday 11 April 2026 03:00:44 +0000 (0:00:00.151) 0:08:26.419 ********
2026-04-11 03:00:49.168158 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:00:49.168165 | orchestrator |
2026-04-11 03:00:49.168173 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-11 03:00:49.168180 | orchestrator | Saturday 11 April 2026 03:00:44 +0000 (0:00:00.268) 0:08:26.687 ********
2026-04-11 03:00:49.168187 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:00:49.168195 | orchestrator |
2026-04-11 03:00:49.168202 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-11 03:00:49.168214 | orchestrator | Saturday 11 April 2026 03:00:45 +0000 (0:00:00.473) 0:08:26.947 ********
2026-04-11 03:00:49.168226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 03:00:49.168246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 03:00:49.168259 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 03:00:49.168271 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:00:49.168283 | orchestrator |
2026-04-11 03:00:49.168296 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-11 03:00:49.168309 | orchestrator | Saturday 11 April 2026 03:00:45 +0000 (0:00:00.473) 0:08:27.420 ********
2026-04-11 03:00:49.168320 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:00:49.168350 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:00:49.168363 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:00:49.168373 | orchestrator |
2026-04-11 03:00:49.168385 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-11 03:00:49.168396 | orchestrator | Saturday 11 April 2026 03:00:45 +0000 (0:00:00.360) 0:08:27.780 ********
2026-04-11 03:00:49.168406 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:00:49.168417 | orchestrator |
2026-04-11 03:00:49.168430 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-11 03:00:49.168441 | orchestrator | Saturday 11 April 2026 03:00:46 +0000 (0:00:00.328) 0:08:28.108 ********
2026-04-11 03:00:49.168453 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:00:49.168466 | orchestrator |
2026-04-11 03:00:49.168477 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-04-11 03:00:49.168491 | orchestrator |
2026-04-11 03:00:49.168502 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-11 03:00:49.168516 | orchestrator | Saturday 11 April 2026 03:00:47 +0000 (0:00:01.441) 0:08:29.550 ********
2026-04-11 03:00:49.168530 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:00:49.168543 | orchestrator |
2026-04-11 03:00:49.168557 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-11 03:00:49.168573 | orchestrator | Saturday 11 April 2026 03:00:49 +0000 (0:00:01.397) 0:08:30.947 ********
2026-04-11 03:01:17.895105 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:01:17.895243 | orchestrator |
2026-04-11 03:01:17.895263 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-11 03:01:17.895277 | orchestrator | Saturday 11 April 2026 03:00:50 +0000 (0:00:01.478) 0:08:32.426 ********
2026-04-11 03:01:17.895289 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:17.895302 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:17.895313 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:17.895325 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:01:17.895337 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:01:17.895348 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:01:17.895396 | orchestrator |
2026-04-11 03:01:17.895407 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-11 03:01:17.895417 | orchestrator | Saturday 11 April 2026 03:00:52 +0000 (0:00:01.432) 0:08:33.858 ********
2026-04-11 03:01:17.895428 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:01:17.895439 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:17.895451 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:01:17.895462 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:17.895474 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:01:17.895485 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:17.895496 | orchestrator |
2026-04-11 03:01:17.895507 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-11 03:01:17.895517 | orchestrator | Saturday 11 April 2026 03:00:52 +0000 (0:00:00.783) 0:08:34.642 ********
2026-04-11 03:01:17.895529 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:01:17.895541 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:17.895552 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:01:17.895563 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:17.895575 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:17.895586 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:01:17.895597 | orchestrator |
2026-04-11 03:01:17.895608 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-11 03:01:17.895619 | orchestrator | Saturday 11 April 2026 03:00:53 +0000 (0:00:01.021) 0:08:35.664 ********
2026-04-11 03:01:17.895629 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:01:17.895639 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:17.895665 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:01:17.895676 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:17.895687 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:01:17.895698 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:17.895708 | orchestrator |
2026-04-11 03:01:17.895719 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-11 03:01:17.895730 | orchestrator | Saturday 11 April 2026 03:00:54 +0000 (0:00:00.765) 0:08:36.429 ********
2026-04-11 03:01:17.895740 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:17.895751 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:17.895763 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:17.895776 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:01:17.895786 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:01:17.895797 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:01:17.895809 | orchestrator |
2026-04-11 03:01:17.895820 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-11 03:01:17.895831 | orchestrator | Saturday 11 April 2026 03:00:56 +0000 (0:00:01.433) 0:08:37.863 ********
2026-04-11 03:01:17.895842 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:17.895853 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:17.895865 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:17.895876 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:01:17.895889 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:01:17.895901 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:01:17.895912 | orchestrator |
2026-04-11 03:01:17.895923 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-11 03:01:17.895947 | orchestrator | Saturday 11 April 2026 03:00:56 +0000 (0:00:00.731) 0:08:38.595 ********
2026-04-11 03:01:17.895960 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:17.895972 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:17.895984 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:17.895996 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:01:17.896008 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:01:17.896021 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:01:17.896032 | orchestrator |
2026-04-11 03:01:17.896043 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-11 03:01:17.896054 | orchestrator | Saturday 11 April 2026 03:00:57 +0000 (0:00:00.975) 0:08:39.571 ********
2026-04-11 03:01:17.896065 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:17.896077 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:17.896088 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:17.896099 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:01:17.896111 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:01:17.896122 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:01:17.896133 | orchestrator |
2026-04-11 03:01:17.896144 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-11 03:01:17.896156 | orchestrator | Saturday 11 April 2026 03:00:58 +0000 (0:00:01.133) 0:08:40.704 ********
2026-04-11 03:01:17.896167 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:17.896178 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:17.896190 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:17.896200 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:01:17.896211 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:01:17.896222 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:01:17.896231 | orchestrator |
2026-04-11 03:01:17.896242 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-11 03:01:17.896253 | orchestrator | Saturday 11 April 2026 03:01:00 +0000 (0:00:01.424) 0:08:42.129 ********
2026-04-11 03:01:17.896265 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:17.896276 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:17.896288 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:17.896299 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:01:17.896311 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:01:17.896322 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:01:17.896334 | orchestrator |
2026-04-11 03:01:17.896345 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 03:01:17.896405 | orchestrator | Saturday 11 April 2026 03:01:01 +0000 (0:00:00.716) 0:08:42.845 ********
2026-04-11 03:01:17.896442 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:17.896455 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:17.896467 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:17.896480 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:01:17.896492 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:01:17.896504 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:01:17.896516 | orchestrator |
2026-04-11 03:01:17.896529 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 03:01:17.896541 | orchestrator | Saturday 11 April 2026 03:01:02 +0000 (0:00:00.989) 0:08:43.834 ********
2026-04-11 03:01:17.896563 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:17.896576 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:17.896588 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:17.896600 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:01:17.896611 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:01:17.896623 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:01:17.896636 | orchestrator |
2026-04-11 03:01:17.896649 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 03:01:17.896661 | orchestrator | Saturday 11 April 2026 03:01:02 +0000 (0:00:00.678) 0:08:44.513 ********
2026-04-11 03:01:17.896673 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:17.896685 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:17.896697 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:17.896719 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:01:17.896730 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:01:17.896742 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:01:17.896753 | orchestrator |
2026-04-11 03:01:17.896766 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 03:01:17.896779 | orchestrator | Saturday 11 April 2026 03:01:03 +0000 (0:00:00.950) 0:08:45.463 ********
2026-04-11 03:01:17.896791 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:17.896802 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:17.896815 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:17.896827 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:01:17.896839 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:01:17.896851 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:01:17.896864 | orchestrator |
2026-04-11 03:01:17.896876 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 03:01:17.896888 | orchestrator | Saturday 11 April 2026 03:01:04 +0000 (0:00:00.651) 0:08:46.115 ********
2026-04-11 03:01:17.896898 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:17.896909 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:17.896919 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:17.896930 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:01:17.896941 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:01:17.896953 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:01:17.896966 | orchestrator |
2026-04-11 03:01:17.896978 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 03:01:17.896991 | orchestrator | Saturday 11 April 2026 03:01:05 +0000 (0:00:01.042) 0:08:47.157 ********
2026-04-11 03:01:17.897004 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:17.897016 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:17.897028 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:17.897040 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:01:17.897052 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:01:17.897063 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:01:17.897075 | orchestrator |
2026-04-11 03:01:17.897088 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 03:01:17.897100 | orchestrator | Saturday 11 April 2026 03:01:06 +0000 (0:00:00.643) 0:08:47.801 ********
2026-04-11 03:01:17.897112 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:17.897125 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:17.897137 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:17.897148 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:01:17.897159 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:01:17.897170 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:01:17.897182 | orchestrator |
2026-04-11 03:01:17.897194 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 03:01:17.897317 | orchestrator | Saturday 11 April 2026 03:01:07 +0000 (0:00:01.045) 0:08:48.846 ********
2026-04-11 03:01:17.897342 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:17.897378 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:17.897391 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:17.897402 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:01:17.897414 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:01:17.897425 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:01:17.897436 | orchestrator |
2026-04-11 03:01:17.897447 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 03:01:17.897458 | orchestrator | Saturday 11 April 2026 03:01:07 +0000 (0:00:00.770) 0:08:49.617 ********
2026-04-11 03:01:17.897467 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:17.897478 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:17.897488 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:17.897499 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:01:17.897510 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:01:17.897522 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:01:17.897532 | orchestrator |
2026-04-11 03:01:17.897544 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-11 03:01:17.897565 | orchestrator | Saturday 11 April 2026 03:01:09 +0000 (0:00:01.538) 0:08:51.156 ********
2026-04-11 03:01:17.897578 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-11 03:01:17.897589 | orchestrator |
2026-04-11 03:01:17.897601 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-11 03:01:17.897613 | orchestrator | Saturday 11 April 2026 03:01:13 +0000 (0:00:04.210) 0:08:55.367 ********
2026-04-11 03:01:17.897624 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-11 03:01:17.897636 | orchestrator |
2026-04-11 03:01:17.897646 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-11 03:01:17.897655 | orchestrator | Saturday 11 April 2026 03:01:16 +0000 (0:00:02.742) 0:08:58.110 ********
2026-04-11 03:01:17.897667 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:01:17.897678 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:01:17.897689 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:01:17.897701 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:01:17.897712 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:01:17.897723 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:01:17.897734 | orchestrator |
2026-04-11 03:01:17.897759 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-11 03:01:43.345041 | orchestrator | Saturday 11 April 2026 03:01:17 +0000 (0:00:01.562) 0:08:59.672 ********
2026-04-11 03:01:43.345134 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:01:43.345145 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:01:43.345153 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:01:43.345160 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:01:43.345166 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:01:43.345174 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:01:43.345180 | orchestrator |
2026-04-11 03:01:43.345188 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-11 03:01:43.345196 | orchestrator | Saturday 11 April 2026 03:01:19 +0000 (0:00:01.305) 0:09:00.978 ********
2026-04-11 03:01:43.345203 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:01:43.345211 | orchestrator |
2026-04-11 03:01:43.345218 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-11 03:01:43.345225 | orchestrator | Saturday 11 April 2026 03:01:20 +0000 (0:00:01.380) 0:09:02.359 ********
2026-04-11 03:01:43.345232 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:01:43.345238 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:01:43.345245 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:01:43.345252 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:01:43.345271 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:01:43.345287 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:01:43.345294 | orchestrator |
2026-04-11 03:01:43.345301 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-11 03:01:43.345308 | orchestrator | Saturday 11 April 2026 03:01:22 +0000 (0:00:01.636) 0:09:03.995 ********
2026-04-11 03:01:43.345315 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:01:43.345322 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:01:43.345328 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:01:43.345335 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:01:43.345342 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:01:43.345348 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:01:43.345355 | orchestrator |
2026-04-11 03:01:43.345362 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-11 03:01:43.345428 | orchestrator | Saturday 11 April 2026 03:01:26 +0000 (0:00:04.010) 0:09:08.006 ********
2026-04-11 03:01:43.345438 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:01:43.345465 | orchestrator |
2026-04-11 03:01:43.345472 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-11 03:01:43.345479 | orchestrator | Saturday 11 April 2026 03:01:27 +0000 (0:00:01.486) 0:09:09.492 ********
2026-04-11 03:01:43.345486 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:43.345493 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:43.345500 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:43.345507 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:01:43.345513 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:01:43.345520 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:01:43.345527 | orchestrator |
2026-04-11 03:01:43.345533 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-11 03:01:43.345540 | orchestrator | Saturday 11 April 2026 03:01:28 +0000 (0:00:00.778) 0:09:10.271 ********
2026-04-11 03:01:43.345547 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:01:43.345553 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:01:43.345560 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:01:43.345567 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:01:43.345573 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:01:43.345580 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:01:43.345586 | orchestrator |
2026-04-11 03:01:43.345593 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-11 03:01:43.345600 | orchestrator | Saturday 11 April 2026 03:01:31 +0000 (0:00:02.552) 0:09:12.823 ********
2026-04-11 03:01:43.345606 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:43.345613 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:43.345620 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:43.345626 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:01:43.345633 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:01:43.345639 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:01:43.345646 | orchestrator |
2026-04-11 03:01:43.345653 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-11 03:01:43.345660 | orchestrator |
2026-04-11 03:01:43.345667 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-11 03:01:43.345674 | orchestrator | Saturday 11 April 2026 03:01:32 +0000 (0:00:01.021) 0:09:13.844 ********
2026-04-11 03:01:43.345681 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:01:43.345688 | orchestrator |
2026-04-11 03:01:43.345695 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-11 03:01:43.345702 | orchestrator | Saturday 11 April 2026 03:01:32 +0000 (0:00:00.896) 0:09:14.741 ********
2026-04-11 03:01:43.345708 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:01:43.345715 | orchestrator |
2026-04-11 03:01:43.345722 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-11 03:01:43.345729 | orchestrator | Saturday 11 April 2026 03:01:33 +0000 (0:00:00.589) 0:09:15.330 ********
2026-04-11 03:01:43.345735 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:43.345742 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:43.345749 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:43.345755 | orchestrator |
2026-04-11 03:01:43.345762 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-11 03:01:43.345769 | orchestrator | Saturday 11 April 2026 03:01:34 +0000 (0:00:00.654) 0:09:15.985 ********
2026-04-11 03:01:43.345776 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:43.345783 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:43.345789 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:43.345796 | orchestrator |
2026-04-11 03:01:43.345816 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-11 03:01:43.345823 | orchestrator | Saturday 11 April 2026 03:01:34 +0000 (0:00:00.752) 0:09:16.737 ********
2026-04-11 03:01:43.345830 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:43.345837 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:43.345849 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:43.345856 | orchestrator |
2026-04-11 03:01:43.345863 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-11 03:01:43.345870 | orchestrator | Saturday 11 April 2026 03:01:35 +0000 (0:00:00.809) 0:09:17.547 ********
2026-04-11 03:01:43.345876 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:43.345883 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:43.345893 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:43.345905 | orchestrator |
2026-04-11 03:01:43.345917 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-11 03:01:43.345927 | orchestrator | Saturday 11 April 2026 03:01:36 +0000 (0:00:01.090) 0:09:18.637 ********
2026-04-11 03:01:43.345939 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:43.345950 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:43.345961 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:43.345973 | orchestrator |
2026-04-11 03:01:43.345985 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-11 03:01:43.345997 | orchestrator | Saturday 11 April 2026 03:01:37 +0000 (0:00:00.355) 0:09:18.993 ********
2026-04-11 03:01:43.346009 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:43.346056 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:43.346063 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:43.346070 | orchestrator |
2026-04-11 03:01:43.346077 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-11 03:01:43.346084 | orchestrator | Saturday 11 April 2026 03:01:37 +0000 (0:00:00.393) 0:09:19.386 ********
2026-04-11 03:01:43.346091 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:43.346097 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:43.346104 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:43.346111 | orchestrator |
2026-04-11 03:01:43.346117 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-11 03:01:43.346124 | orchestrator | Saturday 11 April 2026 03:01:37 +0000 (0:00:00.345) 0:09:19.732 ********
2026-04-11 03:01:43.346131 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:43.346137 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:43.346149 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:43.346156 | orchestrator |
2026-04-11 03:01:43.346163 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-11 03:01:43.346169 | orchestrator | Saturday 11 April 2026 03:01:39 +0000 (0:00:01.066) 0:09:20.798 ********
2026-04-11 03:01:43.346176 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:43.346183 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:43.346189 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:43.346196 | orchestrator |
2026-04-11 03:01:43.346202 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-11 03:01:43.346209 | orchestrator | Saturday 11 April 2026 03:01:39 +0000 (0:00:00.798) 0:09:21.596 ********
2026-04-11 03:01:43.346216 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:43.346222 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:43.346229 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:43.346236 | orchestrator |
2026-04-11 03:01:43.346242 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 03:01:43.346249 | orchestrator | Saturday 11 April 2026 03:01:40 +0000 (0:00:00.351) 0:09:21.948 ********
2026-04-11 03:01:43.346256 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:43.346262 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:43.346269 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:43.346276 | orchestrator |
2026-04-11 03:01:43.346282 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 03:01:43.346289 | orchestrator | Saturday 11 April 2026 03:01:40 +0000 (0:00:00.334) 0:09:22.283 ********
2026-04-11 03:01:43.346296 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:43.346302 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:43.346309 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:43.346316 | orchestrator |
2026-04-11 03:01:43.346322 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 03:01:43.346334 | orchestrator | Saturday 11 April 2026 03:01:41 +0000 (0:00:00.657) 0:09:22.941 ********
2026-04-11 03:01:43.346341 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:43.346347 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:43.346354 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:43.346361 | orchestrator |
2026-04-11 03:01:43.346368 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 03:01:43.346395 | orchestrator | Saturday 11 April 2026 03:01:41 +0000 (0:00:00.419) 0:09:23.360 ********
2026-04-11 03:01:43.346403 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:01:43.346414 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:01:43.346424 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:01:43.346435 | orchestrator |
2026-04-11 03:01:43.346447 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 03:01:43.346458 | orchestrator | Saturday 11 April 2026 03:01:41 +0000 (0:00:00.382) 0:09:23.743 ********
2026-04-11 03:01:43.346469 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:43.346481 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:43.346488 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:43.346494 | orchestrator |
2026-04-11 03:01:43.346501 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 03:01:43.346508 | orchestrator | Saturday 11 April 2026 03:01:42 +0000 (0:00:00.348) 0:09:24.091 ********
2026-04-11 03:01:43.346514 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:43.346521 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:43.346528 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:43.346534 | orchestrator |
2026-04-11 03:01:43.346541 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 03:01:43.346547 | orchestrator | Saturday 11 April 2026 03:01:42 +0000 (0:00:00.654) 0:09:24.746 ********
2026-04-11 03:01:43.346554 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:01:43.346561 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:01:43.346567 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:01:43.346574 | orchestrator |
2026-04-11 03:01:43.346581 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 03:01:43.346594 | orchestrator | Saturday 11 April 2026 03:01:43 +0000 (0:00:00.376) 0:09:25.123 ********
2026-04-11 03:02:23.344748 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:02:23.344861 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:02:23.344876 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:02:23.344888 | orchestrator |
2026-04-11 03:02:23.344901 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 03:02:23.344914 | orchestrator | Saturday 11 April 2026 03:01:43 +0000 (0:00:00.380) 0:09:25.503 ********
2026-04-11 03:02:23.344925 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:02:23.344936 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:02:23.344947 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:02:23.344958 | orchestrator |
2026-04-11 03:02:23.344969 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-11 03:02:23.344980 | orchestrator | Saturday 11 April 2026 03:01:44 +0000 (0:00:00.903) 0:09:26.406 ********
2026-04-11 03:02:23.344992 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:02:23.345003 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:02:23.345014 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-11 03:02:23.345026 | orchestrator |
2026-04-11 03:02:23.345037 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-11 03:02:23.345048 | orchestrator | Saturday 11 April 2026 03:01:45 +0000 (0:00:00.471) 0:09:26.878 ********
2026-04-11 03:02:23.345060 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-11 03:02:23.345071 | orchestrator |
2026-04-11 03:02:23.345082 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-11 03:02:23.345092 | orchestrator | Saturday 11 April 2026 03:01:47 +0000 (0:00:02.172) 0:09:29.051 ********
2026-04-11 03:02:23.345131 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-11 03:02:23.345146 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:02:23.345158 | orchestrator |
2026-04-11 03:02:23.345169 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-11 03:02:23.345194 | orchestrator | Saturday 11 April 2026 03:01:47 +0000 (0:00:00.235) 0:09:29.287 ********
2026-04-11 03:02:23.345209 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-11 03:02:23.345230 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-11 03:02:23.345242 | orchestrator |
2026-04-11 03:02:23.345253 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-11 03:02:23.345264 | orchestrator | Saturday 11 April 2026 03:01:56 +0000 (0:00:08.560) 0:09:37.847 ********
2026-04-11 03:02:23.345275 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-11 03:02:23.345288 | orchestrator |
2026-04-11 03:02:23.345302 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-11 03:02:23.345315 | orchestrator | Saturday 11 April 2026 03:01:59 +0000 (0:00:03.711) 0:09:41.559 ********
2026-04-11 03:02:23.345327 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:02:23.345341 | orchestrator |
2026-04-11 03:02:23.345354 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-11 03:02:23.345366 | orchestrator | Saturday 11 April 2026 03:02:00 +0000 (0:00:00.926) 0:09:42.486 ********
2026-04-11 03:02:23.345380 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-11 03:02:23.345393 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-11 03:02:23.345443 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-11 03:02:23.345456 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-11 03:02:23.345469 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-11 03:02:23.345482 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-11 03:02:23.345494 | orchestrator |
2026-04-11 03:02:23.345508 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-11 03:02:23.345534 | orchestrator | Saturday 11 April 2026 03:02:01 +0000 (0:00:01.162) 0:09:43.648 ********
2026-04-11 03:02:23.345547 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:02:23.345572 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-11 03:02:23.345585 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-11 03:02:23.345598 | orchestrator |
2026-04-11 03:02:23.345611 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-11 03:02:23.345624 | orchestrator | Saturday 11 April 2026 03:02:04 +0000 (0:00:02.265) 0:09:45.913 ********
2026-04-11 03:02:23.345638 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-11 03:02:23.345651 | orchestrator | skipping: [testbed-node-3]
=> (item=None)  2026-04-11 03:02:23.345663 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:02:23.345674 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-11 03:02:23.345685 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-11 03:02:23.345697 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:02:23.345717 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-11 03:02:23.345746 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-11 03:02:23.345758 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:02:23.345769 | orchestrator | 2026-04-11 03:02:23.345780 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-11 03:02:23.345791 | orchestrator | Saturday 11 April 2026 03:02:05 +0000 (0:00:01.242) 0:09:47.156 ******** 2026-04-11 03:02:23.345802 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:02:23.345813 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:02:23.345824 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:02:23.345835 | orchestrator | 2026-04-11 03:02:23.345846 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-11 03:02:23.345857 | orchestrator | Saturday 11 April 2026 03:02:08 +0000 (0:00:03.249) 0:09:50.405 ******** 2026-04-11 03:02:23.345868 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:02:23.345879 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:02:23.345890 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:02:23.345900 | orchestrator | 2026-04-11 03:02:23.345912 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-11 03:02:23.345923 | orchestrator | Saturday 11 April 2026 03:02:08 +0000 (0:00:00.362) 0:09:50.767 ******** 2026-04-11 03:02:23.345934 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-11 03:02:23.345945 | orchestrator | 2026-04-11 03:02:23.345956 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-11 03:02:23.345967 | orchestrator | Saturday 11 April 2026 03:02:09 +0000 (0:00:00.925) 0:09:51.693 ******** 2026-04-11 03:02:23.345978 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:02:23.345989 | orchestrator | 2026-04-11 03:02:23.345999 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-11 03:02:23.346011 | orchestrator | Saturday 11 April 2026 03:02:10 +0000 (0:00:00.671) 0:09:52.364 ******** 2026-04-11 03:02:23.346090 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:02:23.346102 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:02:23.346112 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:02:23.346123 | orchestrator | 2026-04-11 03:02:23.346141 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-11 03:02:23.346152 | orchestrator | Saturday 11 April 2026 03:02:11 +0000 (0:00:01.353) 0:09:53.718 ******** 2026-04-11 03:02:23.346163 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:02:23.346174 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:02:23.346185 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:02:23.346195 | orchestrator | 2026-04-11 03:02:23.346206 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-11 03:02:23.346218 | orchestrator | Saturday 11 April 2026 03:02:13 +0000 (0:00:01.588) 0:09:55.306 ******** 2026-04-11 03:02:23.346239 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:02:23.346263 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:02:23.346290 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:02:23.346309 | orchestrator | 2026-04-11 
03:02:23.346328 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-11 03:02:23.346346 | orchestrator | Saturday 11 April 2026 03:02:15 +0000 (0:00:01.868) 0:09:57.174 ******** 2026-04-11 03:02:23.346366 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:02:23.346385 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:02:23.346433 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:02:23.346452 | orchestrator | 2026-04-11 03:02:23.346470 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-11 03:02:23.346488 | orchestrator | Saturday 11 April 2026 03:02:17 +0000 (0:00:02.042) 0:09:59.217 ******** 2026-04-11 03:02:23.346506 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:02:23.346524 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:02:23.346558 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:02:23.346576 | orchestrator | 2026-04-11 03:02:23.346594 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-11 03:02:23.346614 | orchestrator | Saturday 11 April 2026 03:02:19 +0000 (0:00:01.592) 0:10:00.810 ******** 2026-04-11 03:02:23.346633 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:02:23.346651 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:02:23.346669 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:02:23.346686 | orchestrator | 2026-04-11 03:02:23.346698 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-11 03:02:23.346709 | orchestrator | Saturday 11 April 2026 03:02:19 +0000 (0:00:00.773) 0:10:01.583 ******** 2026-04-11 03:02:23.346723 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:02:23.346741 | orchestrator | 2026-04-11 03:02:23.346763 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-04-11 03:02:23.346789 | orchestrator | Saturday 11 April 2026 03:02:20 +0000 (0:00:00.918) 0:10:02.501 ******** 2026-04-11 03:02:23.346805 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:02:23.346823 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:02:23.346841 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:02:23.346859 | orchestrator | 2026-04-11 03:02:23.346876 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-11 03:02:23.346890 | orchestrator | Saturday 11 April 2026 03:02:21 +0000 (0:00:00.369) 0:10:02.871 ******** 2026-04-11 03:02:23.346906 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:02:23.346924 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:02:23.346941 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:02:23.346958 | orchestrator | 2026-04-11 03:02:23.346976 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-11 03:02:23.346993 | orchestrator | Saturday 11 April 2026 03:02:22 +0000 (0:00:01.254) 0:10:04.125 ******** 2026-04-11 03:02:23.347013 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 03:02:23.347032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 03:02:23.347051 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 03:02:23.347068 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:02:23.347079 | orchestrator | 2026-04-11 03:02:23.347107 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-11 03:02:43.568750 | orchestrator | Saturday 11 April 2026 03:02:23 +0000 (0:00:00.999) 0:10:05.124 ******** 2026-04-11 03:02:43.568824 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:02:43.568832 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:02:43.568836 | orchestrator | ok: [testbed-node-5] 2026-04-11 
03:02:43.568841 | orchestrator | 2026-04-11 03:02:43.568846 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-11 03:02:43.568850 | orchestrator | 2026-04-11 03:02:43.568855 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 03:02:43.568859 | orchestrator | Saturday 11 April 2026 03:02:24 +0000 (0:00:00.944) 0:10:06.069 ******** 2026-04-11 03:02:43.568864 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:02:43.568869 | orchestrator | 2026-04-11 03:02:43.568873 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 03:02:43.568878 | orchestrator | Saturday 11 April 2026 03:02:24 +0000 (0:00:00.595) 0:10:06.664 ******** 2026-04-11 03:02:43.568882 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:02:43.568886 | orchestrator | 2026-04-11 03:02:43.568890 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 03:02:43.568894 | orchestrator | Saturday 11 April 2026 03:02:25 +0000 (0:00:00.927) 0:10:07.592 ******** 2026-04-11 03:02:43.568898 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:02:43.568923 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:02:43.568928 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:02:43.568932 | orchestrator | 2026-04-11 03:02:43.568936 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 03:02:43.568940 | orchestrator | Saturday 11 April 2026 03:02:26 +0000 (0:00:00.363) 0:10:07.956 ******** 2026-04-11 03:02:43.568944 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:02:43.568948 | orchestrator | ok: [testbed-node-4] 2026-04-11 
03:02:43.568953 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:02:43.568957 | orchestrator | 2026-04-11 03:02:43.568961 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-11 03:02:43.568975 | orchestrator | Saturday 11 April 2026 03:02:26 +0000 (0:00:00.745) 0:10:08.701 ******** 2026-04-11 03:02:43.568980 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:02:43.568986 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:02:43.568993 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:02:43.569002 | orchestrator | 2026-04-11 03:02:43.569012 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 03:02:43.569018 | orchestrator | Saturday 11 April 2026 03:02:27 +0000 (0:00:01.071) 0:10:09.773 ******** 2026-04-11 03:02:43.569025 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:02:43.569032 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:02:43.569038 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:02:43.569044 | orchestrator | 2026-04-11 03:02:43.569051 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 03:02:43.569058 | orchestrator | Saturday 11 April 2026 03:02:28 +0000 (0:00:00.769) 0:10:10.543 ******** 2026-04-11 03:02:43.569064 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:02:43.569070 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:02:43.569075 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:02:43.569081 | orchestrator | 2026-04-11 03:02:43.569088 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 03:02:43.569095 | orchestrator | Saturday 11 April 2026 03:02:29 +0000 (0:00:00.410) 0:10:10.953 ******** 2026-04-11 03:02:43.569102 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:02:43.569109 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:02:43.569116 | orchestrator | skipping: 
[testbed-node-5] 2026-04-11 03:02:43.569122 | orchestrator | 2026-04-11 03:02:43.569129 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 03:02:43.569135 | orchestrator | Saturday 11 April 2026 03:02:29 +0000 (0:00:00.375) 0:10:11.329 ******** 2026-04-11 03:02:43.569142 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:02:43.569150 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:02:43.569156 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:02:43.569163 | orchestrator | 2026-04-11 03:02:43.569169 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 03:02:43.569176 | orchestrator | Saturday 11 April 2026 03:02:30 +0000 (0:00:00.649) 0:10:11.979 ******** 2026-04-11 03:02:43.569182 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:02:43.569189 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:02:43.569196 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:02:43.569203 | orchestrator | 2026-04-11 03:02:43.569209 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 03:02:43.569216 | orchestrator | Saturday 11 April 2026 03:02:30 +0000 (0:00:00.804) 0:10:12.783 ******** 2026-04-11 03:02:43.569223 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:02:43.569230 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:02:43.569237 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:02:43.569243 | orchestrator | 2026-04-11 03:02:43.569247 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 03:02:43.569252 | orchestrator | Saturday 11 April 2026 03:02:31 +0000 (0:00:00.783) 0:10:13.566 ******** 2026-04-11 03:02:43.569256 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:02:43.569260 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:02:43.569264 | orchestrator | skipping: [testbed-node-5] 2026-04-11 
03:02:43.569276 | orchestrator | 2026-04-11 03:02:43.569281 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 03:02:43.569285 | orchestrator | Saturday 11 April 2026 03:02:32 +0000 (0:00:00.356) 0:10:13.922 ******** 2026-04-11 03:02:43.569289 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:02:43.569293 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:02:43.569297 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:02:43.569302 | orchestrator | 2026-04-11 03:02:43.569306 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 03:02:43.569311 | orchestrator | Saturday 11 April 2026 03:02:32 +0000 (0:00:00.621) 0:10:14.544 ******** 2026-04-11 03:02:43.569316 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:02:43.569320 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:02:43.569325 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:02:43.569330 | orchestrator | 2026-04-11 03:02:43.569348 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 03:02:43.569353 | orchestrator | Saturday 11 April 2026 03:02:33 +0000 (0:00:00.367) 0:10:14.911 ******** 2026-04-11 03:02:43.569358 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:02:43.569362 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:02:43.569368 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:02:43.569372 | orchestrator | 2026-04-11 03:02:43.569377 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 03:02:43.569382 | orchestrator | Saturday 11 April 2026 03:02:33 +0000 (0:00:00.399) 0:10:15.311 ******** 2026-04-11 03:02:43.569387 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:02:43.569392 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:02:43.569396 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:02:43.569401 | orchestrator | 2026-04-11 
03:02:43.569406 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 03:02:43.569411 | orchestrator | Saturday 11 April 2026 03:02:33 +0000 (0:00:00.371) 0:10:15.683 ******** 2026-04-11 03:02:43.569416 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:02:43.569440 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:02:43.569445 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:02:43.569450 | orchestrator | 2026-04-11 03:02:43.569454 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 03:02:43.569459 | orchestrator | Saturday 11 April 2026 03:02:34 +0000 (0:00:00.660) 0:10:16.343 ******** 2026-04-11 03:02:43.569464 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:02:43.569469 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:02:43.569473 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:02:43.569478 | orchestrator | 2026-04-11 03:02:43.569483 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 03:02:43.569487 | orchestrator | Saturday 11 April 2026 03:02:34 +0000 (0:00:00.374) 0:10:16.718 ******** 2026-04-11 03:02:43.569492 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:02:43.569497 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:02:43.569502 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:02:43.569506 | orchestrator | 2026-04-11 03:02:43.569511 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 03:02:43.569522 | orchestrator | Saturday 11 April 2026 03:02:35 +0000 (0:00:00.358) 0:10:17.076 ******** 2026-04-11 03:02:43.569527 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:02:43.569531 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:02:43.569536 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:02:43.569541 | orchestrator | 2026-04-11 03:02:43.569546 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 03:02:43.569550 | orchestrator | Saturday 11 April 2026 03:02:35 +0000 (0:00:00.369) 0:10:17.446 ******** 2026-04-11 03:02:43.569555 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:02:43.569560 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:02:43.569567 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:02:43.569575 | orchestrator | 2026-04-11 03:02:43.569586 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-11 03:02:43.569599 | orchestrator | Saturday 11 April 2026 03:02:36 +0000 (0:00:00.974) 0:10:18.420 ******** 2026-04-11 03:02:43.569607 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:02:43.569614 | orchestrator | 2026-04-11 03:02:43.569621 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-11 03:02:43.569628 | orchestrator | Saturday 11 April 2026 03:02:37 +0000 (0:00:00.651) 0:10:19.071 ******** 2026-04-11 03:02:43.569634 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 03:02:43.569642 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-11 03:02:43.569649 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 03:02:43.569656 | orchestrator | 2026-04-11 03:02:43.569663 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-11 03:02:43.569670 | orchestrator | Saturday 11 April 2026 03:02:39 +0000 (0:00:02.554) 0:10:21.626 ******** 2026-04-11 03:02:43.569678 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-11 03:02:43.569685 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-11 03:02:43.569693 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:02:43.569698 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-04-11 03:02:43.569702 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-11 03:02:43.569706 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:02:43.569710 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-11 03:02:43.569714 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-11 03:02:43.569718 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:02:43.569722 | orchestrator | 2026-04-11 03:02:43.569726 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-11 03:02:43.569730 | orchestrator | Saturday 11 April 2026 03:02:41 +0000 (0:00:01.573) 0:10:23.199 ******** 2026-04-11 03:02:43.569734 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:02:43.569738 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:02:43.569743 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:02:43.569747 | orchestrator | 2026-04-11 03:02:43.569751 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-11 03:02:43.569755 | orchestrator | Saturday 11 April 2026 03:02:41 +0000 (0:00:00.400) 0:10:23.600 ******** 2026-04-11 03:02:43.569759 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:02:43.569764 | orchestrator | 2026-04-11 03:02:43.569768 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-11 03:02:43.569772 | orchestrator | Saturday 11 April 2026 03:02:42 +0000 (0:00:00.851) 0:10:24.452 ******** 2026-04-11 03:02:43.569777 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 03:02:43.569788 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 03:03:37.012984 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-11 03:03:37.013111 | orchestrator | 2026-04-11 03:03:37.013134 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-11 03:03:37.013151 | orchestrator | Saturday 11 April 2026 03:02:43 +0000 (0:00:00.897) 0:10:25.349 ******** 2026-04-11 03:03:37.013166 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 03:03:37.013183 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-11 03:03:37.013198 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 03:03:37.013243 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 03:03:37.013257 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-11 03:03:37.013272 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-11 03:03:37.013292 | orchestrator | 2026-04-11 03:03:37.013307 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-11 03:03:37.013321 | orchestrator | Saturday 11 April 2026 03:02:48 +0000 (0:00:04.951) 0:10:30.301 ******** 2026-04-11 03:03:37.013335 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 03:03:37.013349 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 03:03:37.013362 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 03:03:37.013390 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 03:03:37.013405 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 03:03:37.013419 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 03:03:37.013432 | orchestrator | 2026-04-11 03:03:37.013444 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-11 03:03:37.013483 | orchestrator | Saturday 11 April 2026 03:02:50 +0000 (0:00:02.442) 0:10:32.743 ******** 2026-04-11 03:03:37.013499 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-11 03:03:37.013512 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:03:37.013525 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-11 03:03:37.013539 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:03:37.013552 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-11 03:03:37.013566 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:03:37.013579 | orchestrator | 2026-04-11 03:03:37.013592 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-11 03:03:37.013605 | orchestrator | Saturday 11 April 2026 03:02:52 +0000 (0:00:01.576) 0:10:34.320 ******** 2026-04-11 03:03:37.013615 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-11 03:03:37.013623 | orchestrator | 2026-04-11 03:03:37.013631 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-11 03:03:37.013639 | orchestrator | Saturday 11 April 2026 03:02:52 +0000 (0:00:00.255) 0:10:34.576 ******** 2026-04-11 03:03:37.013647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})
2026-04-11 03:03:37.013658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013709 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:03:37.013723 | orchestrator |
2026-04-11 03:03:37.013736 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-04-11 03:03:37.013749 | orchestrator | Saturday 11 April 2026 03:02:53 +0000 (0:00:00.701) 0:10:35.278 ********
2026-04-11 03:03:37.013763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013833 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:03:37.013841 | orchestrator |
2026-04-11 03:03:37.013868 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-04-11 03:03:37.013876 | orchestrator | Saturday 11 April 2026 03:02:54 +0000 (0:00:00.667) 0:10:35.945 ********
2026-04-11 03:03:37.013884 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013893 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013901 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013909 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013917 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 03:03:37.013925 | orchestrator |
2026-04-11 03:03:37.013933 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-04-11 03:03:37.013941 | orchestrator | Saturday 11 April 2026 03:03:25 +0000 (0:00:31.326) 0:11:07.272 ********
2026-04-11 03:03:37.013949 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:03:37.013957 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:03:37.013965 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:03:37.013976 | orchestrator |
2026-04-11 03:03:37.013989 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-04-11 03:03:37.014006 | orchestrator | Saturday 11 April 2026 03:03:25 +0000 (0:00:00.354) 0:11:07.626 ********
2026-04-11 03:03:37.014092 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:03:37.014108 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:03:37.014116 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:03:37.014124 | orchestrator |
2026-04-11 03:03:37.014132 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-11 03:03:37.014140 | orchestrator | Saturday 11 April 2026 03:03:26 +0000 (0:00:00.352) 0:11:07.979 ********
2026-04-11 03:03:37.014148 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:03:37.014157 | orchestrator |
2026-04-11 03:03:37.014164 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-11 03:03:37.014172 | orchestrator | Saturday 11 April 2026 03:03:27 +0000 (0:00:00.991) 0:11:08.970 ********
2026-04-11 03:03:37.014182 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:03:37.014195 | orchestrator |
2026-04-11 03:03:37.014209 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-11 03:03:37.014221 | orchestrator | Saturday 11 April 2026 03:03:28 +0000 (0:00:00.896) 0:11:09.867 ********
2026-04-11 03:03:37.014234 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:03:37.014247 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:03:37.014260 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:03:37.014272 | orchestrator |
2026-04-11 03:03:37.014286 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-11 03:03:37.014312 | orchestrator | Saturday 11 April 2026 03:03:29 +0000 (0:00:01.394) 0:11:11.262 ********
2026-04-11 03:03:37.014325 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:03:37.014337 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:03:37.014346 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:03:37.014359 | orchestrator |
2026-04-11 03:03:37.014370 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-11 03:03:37.014381 | orchestrator | Saturday 11 April 2026 03:03:30 +0000 (0:00:01.255) 0:11:12.517 ********
2026-04-11 03:03:37.014392 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:03:37.014414 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:03:37.014427 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:03:37.014440 | orchestrator |
2026-04-11 03:03:37.014454 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-11 03:03:37.014489 | orchestrator | Saturday 11 April 2026 03:03:32 +0000 (0:00:01.950) 0:11:14.467 ********
2026-04-11 03:03:37.014504 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-11 03:03:37.014517 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-11 03:03:37.014540 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 03:03:37.014553 | orchestrator |
2026-04-11 03:03:37.014566 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-11 03:03:37.014578 | orchestrator | Saturday 11 April 2026 03:03:35 +0000 (0:00:02.963) 0:11:17.431 ********
2026-04-11 03:03:37.014591 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:03:37.014603 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:03:37.014617 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:03:37.014630 | orchestrator |
2026-04-11 03:03:37.014643 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-11 03:03:37.014656 | orchestrator | Saturday 11 April 2026 03:03:36 +0000 (0:00:00.381) 0:11:17.813 ********
2026-04-11 03:03:37.014670 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:03:37.014678 | orchestrator |
2026-04-11 03:03:37.014698 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-11 03:03:39.993446 | orchestrator | Saturday 11 April 2026 03:03:36 +0000 (0:00:00.975) 0:11:18.788 ********
2026-04-11 03:03:39.993625 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:03:39.993655 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:03:39.993675 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:03:39.993688 | orchestrator |
2026-04-11 03:03:39.993700 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-11 03:03:39.993711 | orchestrator | Saturday 11 April 2026 03:03:37 +0000 (0:00:00.414) 0:11:19.203 ********
2026-04-11 03:03:39.993722 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:03:39.993734 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:03:39.993744 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:03:39.993755 | orchestrator |
2026-04-11 03:03:39.993766 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-11 03:03:39.993778 | orchestrator | Saturday 11 April 2026 03:03:37 +0000 (0:00:00.396) 0:11:19.599 ********
2026-04-11 03:03:39.993788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 03:03:39.993800 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 03:03:39.993810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 03:03:39.993821 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:03:39.993832 | orchestrator |
2026-04-11 03:03:39.993843 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-11 03:03:39.993854 | orchestrator | Saturday 11 April 2026 03:03:38 +0000 (0:00:01.042) 0:11:20.641 ********
2026-04-11 03:03:39.993895 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:03:39.993906 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:03:39.993917 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:03:39.993928 | orchestrator |
2026-04-11 03:03:39.993939 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 03:03:39.993950 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-04-11 03:03:39.993981 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-04-11 03:03:39.993995 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-04-11 03:03:39.994008 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-04-11 03:03:39.994079 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-04-11 03:03:39.994092 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-04-11 03:03:39.994105 | orchestrator |
2026-04-11 03:03:39.994117 | orchestrator |
2026-04-11 03:03:39.994130 | orchestrator |
2026-04-11 03:03:39.994142 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 03:03:39.994155 | orchestrator | Saturday 11 April 2026 03:03:39 +0000 (0:00:00.609) 0:11:21.251 ********
2026-04-11 03:03:39.994168 | orchestrator | ===============================================================================
2026-04-11 03:03:39.994181 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 54.05s
2026-04-11 03:03:39.994194 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.89s
2026-04-11 03:03:39.994207 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.33s
2026-04-11 03:03:39.994219 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.18s
2026-04-11 03:03:39.994232 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.87s
2026-04-11 03:03:39.994244 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.02s
2026-04-11 03:03:39.994256 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.61s
2026-04-11 03:03:39.994269 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.82s
2026-04-11 03:03:39.994283 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.65s
2026-04-11 03:03:39.994296 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.56s
2026-04-11 03:03:39.994308 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.63s
2026-04-11 03:03:39.994321 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.54s
2026-04-11 03:03:39.994333 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.11s
2026-04-11 03:03:39.994345 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.95s
2026-04-11 03:03:39.994359 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.21s
2026-04-11 03:03:39.994371 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.01s
2026-04-11 03:03:39.994384 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.81s
2026-04-11 03:03:39.994396 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.71s
2026-04-11 03:03:39.994409 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.43s
2026-04-11 03:03:39.994422 | orchestrator | ceph-mds : Create mds keyring ------------------------------------------- 3.25s
2026-04-11 03:03:42.636865 | orchestrator | 2026-04-11 03:03:42 | INFO  | Task b1118b20-4c3c-4df3-8885-5410af9e4757 (ceph-pools) was prepared for execution.
2026-04-11 03:03:42.636990 | orchestrator | 2026-04-11 03:03:42 | INFO  | It takes a moment until task b1118b20-4c3c-4df3-8885-5410af9e4757 (ceph-pools) has been started and output is visible here.
2026-04-11 03:03:58.334192 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-11 03:03:58.334264 | orchestrator | 2.16.14
2026-04-11 03:03:58.334271 | orchestrator |
2026-04-11 03:03:58.334276 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-04-11 03:03:58.334281 | orchestrator |
2026-04-11 03:03:58.334285 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-11 03:03:58.334290 | orchestrator | Saturday 11 April 2026 03:03:47 +0000 (0:00:00.715) 0:00:00.716 ********
2026-04-11 03:03:58.334294 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:03:58.334300 | orchestrator |
2026-04-11 03:03:58.334304 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-11 03:03:58.334308 | orchestrator | Saturday 11 April 2026 03:03:48 +0000 (0:00:00.725) 0:00:01.441 ********
2026-04-11 03:03:58.334312 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:03:58.334316 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:03:58.334320 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:03:58.334324 | orchestrator |
2026-04-11 03:03:58.334328 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-11 03:03:58.334332 | orchestrator | Saturday 11 April 2026 03:03:49 +0000 (0:00:00.643) 0:00:02.085 ********
2026-04-11 03:03:58.334336 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:03:58.334340 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:03:58.334344 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:03:58.334348 | orchestrator |
2026-04-11 03:03:58.334352 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-11 03:03:58.334367 | orchestrator | Saturday 11 April 2026 03:03:49 +0000 (0:00:00.315) 0:00:02.400 ********
2026-04-11 03:03:58.334371 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:03:58.334375 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:03:58.334379 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:03:58.334383 | orchestrator |
2026-04-11 03:03:58.334387 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-11 03:03:58.334391 | orchestrator | Saturday 11 April 2026 03:03:50 +0000 (0:00:00.926) 0:00:03.327 ********
2026-04-11 03:03:58.334395 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:03:58.334399 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:03:58.334403 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:03:58.334407 | orchestrator |
2026-04-11 03:03:58.334411 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-11 03:03:58.334415 | orchestrator | Saturday 11 April 2026 03:03:50 +0000 (0:00:00.358) 0:00:03.686 ********
2026-04-11 03:03:58.334419 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:03:58.334423 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:03:58.334426 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:03:58.334430 | orchestrator |
2026-04-11 03:03:58.334434 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-11 03:03:58.334438 | orchestrator | Saturday 11 April 2026 03:03:51 +0000 (0:00:00.349) 0:00:04.036 ********
2026-04-11 03:03:58.334442 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:03:58.334446 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:03:58.334450 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:03:58.334454 | orchestrator |
2026-04-11 03:03:58.334458 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-11 03:03:58.334462 | orchestrator | Saturday 11 April 2026 03:03:51 +0000 (0:00:00.389) 0:00:04.425 ********
2026-04-11 03:03:58.334466 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:03:58.334471 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:03:58.334532 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:03:58.334537 | orchestrator |
2026-04-11 03:03:58.334541 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-11 03:03:58.334546 | orchestrator | Saturday 11 April 2026 03:03:52 +0000 (0:00:00.607) 0:00:05.033 ********
2026-04-11 03:03:58.334550 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:03:58.334553 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:03:58.334557 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:03:58.334561 | orchestrator |
2026-04-11 03:03:58.334565 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-11 03:03:58.334569 | orchestrator | Saturday 11 April 2026 03:03:52 +0000 (0:00:00.371) 0:00:05.404 ********
2026-04-11 03:03:58.334573 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 03:03:58.334577 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 03:03:58.334581 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 03:03:58.334585 | orchestrator |
2026-04-11 03:03:58.334589 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-11 03:03:58.334593 | orchestrator | Saturday 11 April 2026 03:03:53 +0000 (0:00:00.728) 0:00:06.133 ********
2026-04-11 03:03:58.334597 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:03:58.334601 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:03:58.334605 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:03:58.334609 | orchestrator |
2026-04-11 03:03:58.334613 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-11 03:03:58.334617 | orchestrator | Saturday 11 April 2026 03:03:53 +0000 (0:00:00.484) 0:00:06.617 ********
2026-04-11 03:03:58.334621 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 03:03:58.334625 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 03:03:58.334629 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 03:03:58.334633 | orchestrator |
2026-04-11 03:03:58.334637 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-11 03:03:58.334641 | orchestrator | Saturday 11 April 2026 03:03:56 +0000 (0:00:02.391) 0:00:09.009 ********
2026-04-11 03:03:58.334645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-11 03:03:58.334650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-11 03:03:58.334654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-11 03:03:58.334658 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:03:58.334662 | orchestrator |
2026-04-11 03:03:58.334676 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-11 03:03:58.334680 | orchestrator | Saturday 11 April 2026 03:03:56 +0000 (0:00:00.706) 0:00:09.716 ********
2026-04-11 03:03:58.334686 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 03:03:58.334693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 03:03:58.334697 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 03:03:58.334701 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:03:58.334705 | orchestrator |
2026-04-11 03:03:58.334709 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-11 03:03:58.334713 | orchestrator | Saturday 11 April 2026 03:03:57 +0000 (0:00:01.182) 0:00:10.898 ********
2026-04-11 03:03:58.334726 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 03:03:58.334732 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 03:03:58.334736 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 03:03:58.334740 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:03:58.334745 | orchestrator |
2026-04-11 03:03:58.334749 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-11 03:03:58.334752 | orchestrator | Saturday 11 April 2026 03:03:58 +0000 (0:00:00.176) 0:00:11.074 ********
2026-04-11 03:03:58.334758 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1b0d6fe4ad27', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 03:03:54.670163', 'end': '2026-04-11 03:03:54.718605', 'delta': '0:00:00.048442', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1b0d6fe4ad27'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 03:03:58.334766 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1a56ecc96cb4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 03:03:55.258150', 'end': '2026-04-11 03:03:55.307083', 'delta': '0:00:00.048933', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1a56ecc96cb4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 03:03:58.334774 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f023dde40a6c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 03:03:55.831648', 'end': '2026-04-11 03:03:55.885562', 'delta': '0:00:00.053914', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f023dde40a6c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 03:04:05.899970 | orchestrator |
2026-04-11 03:04:05.900076 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-11 03:04:05.900091 | orchestrator | Saturday 11 April 2026 03:03:58 +0000 (0:00:00.217) 0:00:11.291 ********
2026-04-11 03:04:05.900103 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:04:05.900116 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:04:05.900127 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:04:05.900138 | orchestrator |
2026-04-11 03:04:05.900149 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-11 03:04:05.900172 | orchestrator | Saturday 11 April 2026 03:03:58 +0000 (0:00:00.482) 0:00:11.773 ********
2026-04-11 03:04:05.900203 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-11 03:04:05.900226 | orchestrator |
2026-04-11 03:04:05.900237 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-11 03:04:05.900248 | orchestrator | Saturday 11 April 2026 03:04:00 +0000 (0:00:01.760) 0:00:13.534 ********
2026-04-11 03:04:05.900259 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:05.900270 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:05.900281 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:05.900292 | orchestrator |
2026-04-11 03:04:05.900303 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-11 03:04:05.900314 | orchestrator | Saturday 11 April 2026 03:04:00 +0000 (0:00:00.949) 0:00:13.871 ********
2026-04-11 03:04:05.900325 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:05.900336 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:05.900347 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:05.900358 | orchestrator |
2026-04-11 03:04:05.900369 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 03:04:05.900380 | orchestrator | Saturday 11 April 2026 03:04:01 +0000 (0:00:00.949) 0:00:14.821 ********
2026-04-11 03:04:05.900391 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:05.900402 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:05.900414 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:05.900425 | orchestrator |
2026-04-11 03:04:05.900436 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-11 03:04:05.900447 | orchestrator | Saturday 11 April 2026 03:04:02 +0000 (0:00:00.351) 0:00:15.172 ********
2026-04-11 03:04:05.900458 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:04:05.900469 | orchestrator |
2026-04-11 03:04:05.900509 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-11 03:04:05.900523 | orchestrator | Saturday 11 April 2026 03:04:02 +0000 (0:00:00.150) 0:00:15.323 ********
2026-04-11 03:04:05.900536 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:05.900549 | orchestrator |
2026-04-11 03:04:05.900563 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 03:04:05.900575 | orchestrator | Saturday 11 April 2026 03:04:02 +0000 (0:00:00.272) 0:00:15.595 ********
2026-04-11 03:04:05.900588 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:05.900600 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:05.900614 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:05.900627 | orchestrator |
2026-04-11 03:04:05.900640 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-11 03:04:05.900653 | orchestrator | Saturday 11 April 2026 03:04:02 +0000 (0:00:00.330) 0:00:15.926 ********
2026-04-11 03:04:05.900667 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:05.900679 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:05.900691 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:05.900704 | orchestrator |
2026-04-11 03:04:05.900715 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-11 03:04:05.900726 | orchestrator | Saturday 11 April 2026 03:04:03 +0000 (0:00:00.350) 0:00:16.276 ********
2026-04-11 03:04:05.900737 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:05.900748 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:05.900758 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:05.900769 | orchestrator |
2026-04-11 03:04:05.900788 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-11 03:04:05.900800 | orchestrator | Saturday 11 April 2026 03:04:03 +0000 (0:00:00.616) 0:00:16.893 ********
2026-04-11 03:04:05.900811 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:05.900822 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:05.900833 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:05.900844 | orchestrator |
2026-04-11 03:04:05.900856 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-11 03:04:05.900867 | orchestrator | Saturday 11 April 2026 03:04:04 +0000 (0:00:00.383) 0:00:17.277 ********
2026-04-11 03:04:05.900878 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:05.900889 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:05.900900 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:05.900911 | orchestrator |
2026-04-11 03:04:05.900922 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-11 03:04:05.900933 | orchestrator | Saturday 11 April 2026 03:04:04 +0000 (0:00:00.358) 0:00:17.635 ********
2026-04-11 03:04:05.900944 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:05.900955 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:05.900966 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:05.900977 | orchestrator |
2026-04-11 03:04:05.900988 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-11 03:04:05.900999 | orchestrator | Saturday 11 April 2026 03:04:05 +0000 (0:00:00.619) 0:00:18.254 ********
2026-04-11 03:04:05.901010 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:05.901021 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:05.901032 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:05.901043 | orchestrator |
2026-04-11 03:04:05.901054 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-11 03:04:05.901065 | orchestrator | Saturday 11 April 2026 03:04:05 +0000 (0:00:00.372) 0:00:18.627 ********
2026-04-11 03:04:05.901092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003', 'dm-uuid-LVM-pkDfTbVQleSwcS4k7Dh9BVsoBeNfZTa2LK4cAT3noeZwIltxQlTmbG23aNcLYOeQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-11 03:04:05.901114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200', 'dm-uuid-LVM-Bzm8veJ8WajxWE1rbQG3D6L1YQ7NRJWE5nYLkJZ3j15jpE3LHjt0hSXc3WZuWEzG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-11 03:04:05.901127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 03:04:05.901140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 03:04:05.901159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 03:04:05.901170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 03:04:05.901181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 03:04:05.901193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 03:04:05.901204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 03:04:05.901222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-11 03:04:06.000300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-11 03:04:06.000426 |
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PQeocr-BDfK-Omm3-UVAY-4ZFi-qC83-UyfjmY', 'scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c', 'scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.000441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855', 'dm-uuid-LVM-K7WW8kSs32CapDCsexGLtC6qsV1U5049IOnZa3AHrzxg1HkvDRqme1iBPNHDbFWh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.000474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ESicMG-Y3he-y5ZC-yq3K-67sS-s0jj-bJ518K', 'scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898', 'scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.000572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2', 'dm-uuid-LVM-1JO1XI6e6VuGVeVzDykcfKbBtikjhudLLEUIdm7ttGNsolk0UkQjcUO4narXEX2E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.000583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7', 'scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.000599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.000609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.000617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-11 03:04:06.000625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.000633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.000647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.202196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.202323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.202383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.202411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.202450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Gs6fgb-1Wcf-xL0p-5nrc-t0Sp-iDOp-vEqK0z', 'scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb', 'scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.202473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MaeyQs-lCkd-15by-ONeM-2vsv-Cp22-T0mgnh', 'scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f', 'scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.202571 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:04:06.202586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac', 'scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.202599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.202612 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:04:06.202624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412', 'dm-uuid-LVM-VdQ7qTAVdW9b0W0u4soeoyYCMykAdMqIVywyC0poxaFsTavehHwqykfd0GhP5gkQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.202637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056', 'dm-uuid-LVM-6h6BzLnxVSITPOCXsTMPdEdwYnxpyl6jcENBjNwdWV4iIXI6HpUJIGXCmHnbKWOn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.202649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.202669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.494322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.494581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.494618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.494638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.494655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.494672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-11 03:04:06.494736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.494777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Gv5rB0-5v31-5ChI-IvnR-CmdW-Foh5-mihe2a', 'scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3', 'scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.494800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JtVcog-BSy1-h8Zb-tm9w-DiRX-1Dbq-bS56zI', 'scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78', 'scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.494820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735', 'scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.494842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-11 03:04:06.494864 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:04:06.494885 | orchestrator | 2026-04-11 03:04:06.494907 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-04-11 03:04:06.494922 | orchestrator | Saturday 11 April 2026 03:04:06 +0000 (0:00:00.680) 0:00:19.308 ******** 2026-04-11 03:04:06.494971 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003', 'dm-uuid-LVM-pkDfTbVQleSwcS4k7Dh9BVsoBeNfZTa2LK4cAT3noeZwIltxQlTmbG23aNcLYOeQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.606163 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200', 'dm-uuid-LVM-Bzm8veJ8WajxWE1rbQG3D6L1YQ7NRJWE5nYLkJZ3j15jpE3LHjt0hSXc3WZuWEzG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.606282 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.606308 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.606327 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.606345 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.606364 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.606436 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.606510 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.606525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.606540 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-11 03:04:06.606579 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855', 'dm-uuid-LVM-K7WW8kSs32CapDCsexGLtC6qsV1U5049IOnZa3AHrzxg1HkvDRqme1iBPNHDbFWh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.711247 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PQeocr-BDfK-Omm3-UVAY-4ZFi-qC83-UyfjmY', 'scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c', 'scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.711372 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2', 'dm-uuid-LVM-1JO1XI6e6VuGVeVzDykcfKbBtikjhudLLEUIdm7ttGNsolk0UkQjcUO4narXEX2E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.711396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ESicMG-Y3he-y5ZC-yq3K-67sS-s0jj-bJ518K', 'scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898', 'scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.711417 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.711570 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7', 'scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.711594 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.711613 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.711633 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.711653 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.711671 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.711708 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.711742 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.858698 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.858803 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:04:06.858824 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.858889 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Gs6fgb-1Wcf-xL0p-5nrc-t0Sp-iDOp-vEqK0z', 'scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb', 'scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.858924 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MaeyQs-lCkd-15by-ONeM-2vsv-Cp22-T0mgnh', 'scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f', 'scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.858939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac', 'scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.858952 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.858972 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:04:06.858989 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412', 'dm-uuid-LVM-VdQ7qTAVdW9b0W0u4soeoyYCMykAdMqIVywyC0poxaFsTavehHwqykfd0GhP5gkQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.859002 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056', 'dm-uuid-LVM-6h6BzLnxVSITPOCXsTMPdEdwYnxpyl6jcENBjNwdWV4iIXI6HpUJIGXCmHnbKWOn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.859021 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.979467 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.979653 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.979683 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.979736 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.979776 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.979821 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.979869 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-11 03:04:06.979896 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-11 03:04:06.979994 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Gv5rB0-5v31-5ChI-IvnR-CmdW-Foh5-mihe2a', 'scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3', 'scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 03:04:06.980022 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JtVcog-BSy1-h8Zb-tm9w-DiRX-1Dbq-bS56zI', 'scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78', 'scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 03:04:18.283302 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735', 'scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 03:04:18.283412 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-11-01-39-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-11 03:04:18.283443 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:18.283453 | orchestrator |
2026-04-11 03:04:18.283462 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-11 03:04:18.283471 | orchestrator | Saturday 11 April 2026 03:04:07 +0000 (0:00:00.749) 0:00:20.057 ********
2026-04-11 03:04:18.283478 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:04:18.283486 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:04:18.283545 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:04:18.283557 | orchestrator |
2026-04-11 03:04:18.283608 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-11 03:04:18.283617 | orchestrator | Saturday 11 April 2026 03:04:08 +0000 (0:00:01.004) 0:00:21.062 ********
2026-04-11 03:04:18.283624 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:04:18.283631 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:04:18.283638 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:04:18.283645 | orchestrator |
2026-04-11 03:04:18.283652 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-11 03:04:18.283659 | orchestrator | Saturday 11 April 2026 03:04:08 +0000 (0:00:00.357) 0:00:21.420 ********
2026-04-11 03:04:18.283677 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:04:18.283684 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:04:18.283691 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:04:18.283697 | orchestrator |
2026-04-11 03:04:18.283704 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-11 03:04:18.283711 | orchestrator | Saturday 11 April 2026 03:04:09 +0000 (0:00:00.688) 0:00:22.108 ********
2026-04-11 03:04:18.283718 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:18.283724 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:18.283731 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:18.283738 | orchestrator |
2026-04-11 03:04:18.283744 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-11 03:04:18.283751 | orchestrator | Saturday 11 April 2026 03:04:09 +0000 (0:00:00.361) 0:00:22.469 ********
2026-04-11 03:04:18.283757 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:18.283764 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:18.283771 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:18.283777 | orchestrator |
2026-04-11 03:04:18.283784 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-11 03:04:18.283790 | orchestrator | Saturday 11 April 2026 03:04:10 +0000 (0:00:00.766) 0:00:23.235 ********
2026-04-11 03:04:18.283797 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:18.283804 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:18.283810 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:18.283817 | orchestrator |
2026-04-11 03:04:18.283823 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-11 03:04:18.283830 | orchestrator | Saturday 11 April 2026 03:04:10 +0000 (0:00:00.366) 0:00:23.602 ********
2026-04-11 03:04:18.283837 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-11 03:04:18.283844 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-11 03:04:18.283851 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-11 03:04:18.283858 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-11 03:04:18.283866 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-11 03:04:18.283882 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-11 03:04:18.283890 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-11 03:04:18.283898 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-11 03:04:18.283906 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-11 03:04:18.283914 | orchestrator |
2026-04-11 03:04:18.283925 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-11 03:04:18.283936 | orchestrator | Saturday 11 April 2026 03:04:11 +0000 (0:00:01.147) 0:00:24.749 ********
2026-04-11 03:04:18.283969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-11 03:04:18.283981 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-11 03:04:18.283992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-11 03:04:18.284002 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:18.284013 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-11 03:04:18.284023 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-11 03:04:18.284035 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-11 03:04:18.284046 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:18.284057 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-11 03:04:18.284067 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-11 03:04:18.284078 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-11 03:04:18.284089 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:18.284099 | orchestrator |
2026-04-11 03:04:18.284109 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-11 03:04:18.284119 | orchestrator | Saturday 11 April 2026 03:04:12 +0000 (0:00:00.424) 0:00:25.174 ********
2026-04-11 03:04:18.284131 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:04:18.284142 | orchestrator |
2026-04-11 03:04:18.284153 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 03:04:18.284165 | orchestrator | Saturday 11 April 2026 03:04:13 +0000 (0:00:00.840) 0:00:26.014 ********
2026-04-11 03:04:18.284176 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:18.284189 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:18.284200 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:18.284211 | orchestrator |
2026-04-11 03:04:18.284222 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 03:04:18.284232 | orchestrator | Saturday 11 April 2026 03:04:13 +0000 (0:00:00.349) 0:00:26.363 ********
2026-04-11 03:04:18.284242 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:18.284252 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:18.284262 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:18.284272 | orchestrator |
2026-04-11 03:04:18.284283 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 03:04:18.284294 | orchestrator | Saturday 11 April 2026 03:04:13 +0000 (0:00:00.363) 0:00:26.726 ********
2026-04-11 03:04:18.284304 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:18.284315 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:04:18.284325 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:04:18.284337 | orchestrator |
2026-04-11 03:04:18.284346 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 03:04:18.284355 | orchestrator | Saturday 11 April 2026 03:04:14 +0000 (0:00:00.601) 0:00:27.328 ********
2026-04-11 03:04:18.284365 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:04:18.284375 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:04:18.284385 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:04:18.284396 | orchestrator |
2026-04-11 03:04:18.284407 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 03:04:18.284418 | orchestrator | Saturday 11 April 2026 03:04:14 +0000 (0:00:00.469) 0:00:27.797 ********
2026-04-11 03:04:18.284440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 03:04:18.284461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 03:04:18.284469 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 03:04:18.284476 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:18.284482 | orchestrator |
2026-04-11 03:04:18.284531 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 03:04:18.284539 | orchestrator | Saturday 11 April 2026 03:04:15 +0000 (0:00:00.444) 0:00:28.241 ********
2026-04-11 03:04:18.284546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 03:04:18.284552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 03:04:18.284559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 03:04:18.284566 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:18.284572 | orchestrator |
2026-04-11 03:04:18.284579 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 03:04:18.284586 | orchestrator | Saturday 11 April 2026 03:04:15 +0000 (0:00:00.416) 0:00:28.658 ********
2026-04-11 03:04:18.284592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 03:04:18.284599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 03:04:18.284606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 03:04:18.284612 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:04:18.284619 | orchestrator |
2026-04-11 03:04:18.284626 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 03:04:18.284632 | orchestrator | Saturday 11 April 2026 03:04:16 +0000 (0:00:00.443) 0:00:29.101 ********
2026-04-11 03:04:18.284639 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:04:18.284646 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:04:18.284653 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:04:18.284659 | orchestrator |
2026-04-11 03:04:18.284666 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 03:04:18.284673 | orchestrator | Saturday 11 April 2026 03:04:16 +0000 (0:00:00.370) 0:00:29.472 ********
2026-04-11 03:04:18.284679 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-11 03:04:18.284686 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-11 03:04:18.284693 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-11 03:04:18.284699 | orchestrator |
2026-04-11 03:04:18.284706 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-11 03:04:18.284713 | orchestrator | Saturday 11 April 2026 03:04:17 +0000 (0:00:00.845) 0:00:30.318 ********
2026-04-11 03:04:18.284720 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 03:04:18.284736 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 03:06:02.369343 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 03:06:02.369455 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 03:06:02.369468 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-11 03:06:02.369477 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-11 03:06:02.369485 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-11 03:06:02.369492 | orchestrator |
2026-04-11 03:06:02.369501 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-11 03:06:02.369509 | orchestrator | Saturday 11 April 2026 03:04:18 +0000 (0:00:00.918) 0:00:31.237 ********
2026-04-11 03:06:02.369517 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 03:06:02.369525 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 03:06:02.369532 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 03:06:02.369610 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 03:06:02.369621 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-11 03:06:02.369629 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-11 03:06:02.369647 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-11 03:06:02.369655 | orchestrator |
2026-04-11 03:06:02.369662 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-04-11 03:06:02.369670 | orchestrator | Saturday 11 April 2026 03:04:20 +0000 (0:00:01.785) 0:00:33.022 ********
2026-04-11 03:06:02.369677 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:06:02.369685 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:06:02.369693 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-04-11 03:06:02.369705 | orchestrator |
2026-04-11 03:06:02.369717 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-04-11 03:06:02.369729 | orchestrator | Saturday 11 April 2026 03:04:20 +0000 (0:00:00.401) 0:00:33.424 ********
2026-04-11 03:06:02.369744 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-11 03:06:02.369775 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-11 03:06:02.369788 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-11 03:06:02.369802 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-11 03:06:02.369815 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-11 03:06:02.369827 | orchestrator |
2026-04-11 03:06:02.369839 | orchestrator | TASK [generate keys] ***********************************************************
2026-04-11 03:06:02.369853 | orchestrator | Saturday 11 April 2026 03:05:08 +0000 (0:00:47.885) 0:01:21.309 ********
2026-04-11 03:06:02.369862 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.369871 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.369880 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.369888 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.369896 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.369906 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.369914 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-04-11 03:06:02.369922 | orchestrator |
2026-04-11 03:06:02.369931 | orchestrator | TASK [get keys from monitors] **************************************************
2026-04-11 03:06:02.369939 | orchestrator | Saturday 11 April 2026 03:05:32 +0000 (0:00:24.649) 0:01:45.959 ********
2026-04-11 03:06:02.369978 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.369994 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.370006 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.370072 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.370082 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.370091 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.370099 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-11 03:06:02.370107 | orchestrator |
2026-04-11 03:06:02.370114 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-04-11 03:06:02.370121 | orchestrator | Saturday 11 April 2026 03:05:44 +0000 (0:00:11.944) 0:01:57.903 ********
2026-04-11 03:06:02.370129 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.370136 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-11 03:06:02.370172 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-11 03:06:02.370180 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.370188 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-11 03:06:02.370195 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-11 03:06:02.370203 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.370210 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-11 03:06:02.370217 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-11 03:06:02.370224 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.370232 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-11 03:06:02.370266 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-11 03:06:02.370279 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.370291 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-11 03:06:02.370302 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-11 03:06:02.370314 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 03:06:02.370325 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-11 03:06:02.370336 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-11 03:06:02.370355 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-04-11 03:06:02.370368 | orchestrator |
2026-04-11 03:06:02.370379 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 03:06:02.370391 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-11 03:06:02.370404 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-11 03:06:02.370418 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-11 03:06:02.370429 | orchestrator |
2026-04-11 03:06:02.370442 | orchestrator |
2026-04-11 03:06:02.370454 | orchestrator |
2026-04-11 03:06:02.370467 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 03:06:02.370489 | orchestrator | Saturday 11 April 2026 03:06:02 +0000 (0:00:17.389) 0:02:15.293 ********
2026-04-11 03:06:02.370497 | orchestrator | ===============================================================================
2026-04-11 03:06:02.370504 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.89s
2026-04-11 03:06:02.370511 | orchestrator | generate keys ---------------------------------------------------------- 24.65s
2026-04-11 03:06:02.370518 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.39s
2026-04-11 03:06:02.370526 | orchestrator | get keys from monitors ------------------------------------------------- 11.94s
2026-04-11 03:06:02.370533 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.39s
2026-04-11 03:06:02.370540 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.79s
2026-04-11 03:06:02.370548 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.76s
2026-04-11 03:06:02.370555 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.18s
2026-04-11 03:06:02.370581 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.15s
2026-04-11 03:06:02.370589 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 1.00s
2026-04-11 03:06:02.370596 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.95s
2026-04-11 03:06:02.370603 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.93s
2026-04-11 03:06:02.370611 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.92s
2026-04-11 03:06:02.370628 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.85s
2026-04-11 03:06:02.774752 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.84s
2026-04-11 03:06:02.774847 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.77s
2026-04-11 03:06:02.774864 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.75s
2026-04-11 03:06:02.774876 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.73s
2026-04-11 03:06:02.774890 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.73s
2026-04-11 03:06:02.774904 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.71s
2026-04-11 03:06:05.560061 | orchestrator | 2026-04-11 03:06:05 | INFO  | Task d31b3af9-00ab-4e55-9250-39cca7574cab (copy-ceph-keys) was prepared for execution.
2026-04-11 03:06:05.560164 | orchestrator | 2026-04-11 03:06:05 | INFO  | It takes a moment until task d31b3af9-00ab-4e55-9250-39cca7574cab (copy-ceph-keys) has been started and output is visible here.
2026-04-11 03:06:46.677290 | orchestrator |
2026-04-11 03:06:46.677377 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-04-11 03:06:46.677388 | orchestrator |
2026-04-11 03:06:46.677395 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-04-11 03:06:46.677403 | orchestrator | Saturday 11 April 2026 03:06:10 +0000 (0:00:00.197) 0:00:00.197 ********
2026-04-11 03:06:46.677410 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-11 03:06:46.677417 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-11 03:06:46.677424 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-11 03:06:46.677434 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-11 03:06:46.677445 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-11 03:06:46.677459 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-11 03:06:46.677471 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-11 03:06:46.677509 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-11 03:06:46.677519 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-11 03:06:46.677528 | orchestrator |
2026-04-11 03:06:46.677538 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-11 03:06:46.677560 | orchestrator | Saturday 11 April 2026 03:06:15 +0000 (0:00:04.558) 0:00:04.755 ********
2026-04-11 03:06:46.677585 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-11 03:06:46.677666 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-11 03:06:46.677678 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-11 03:06:46.677688 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-11 03:06:46.677698 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-11 03:06:46.677708 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-11 03:06:46.677719 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-11 03:06:46.677730 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-11 03:06:46.677737 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-11 03:06:46.677743 | orchestrator |
2026-04-11 03:06:46.677754 | orchestrator | TASK [Create share directory] **************************************************
2026-04-11 03:06:46.677763 | orchestrator | Saturday 11 April 2026 03:06:19 +0000 (0:00:04.311) 0:00:09.067 ********
2026-04-11 03:06:46.677774 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-11 03:06:46.677784 | orchestrator |
2026-04-11 03:06:46.677794 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-04-11 03:06:46.677804 | orchestrator | Saturday 11 April 2026 03:06:20 +0000 (0:00:01.122) 0:00:10.190 ********
2026-04-11 03:06:46.677824 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-04-11 03:06:46.677837 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-11 03:06:46.677844 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-11 03:06:46.677851 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-04-11 03:06:46.677858 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-11 03:06:46.677865 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-04-11 03:06:46.677872 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-04-11 03:06:46.677880 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-04-11 03:06:46.677887 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-04-11 03:06:46.677894 | orchestrator |
2026-04-11 03:06:46.677901 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-04-11 03:06:46.677908 | orchestrator | Saturday 11 April 2026 03:06:35 +0000 (0:00:14.995) 0:00:25.185 ********
2026-04-11 03:06:46.677915 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-04-11 03:06:46.677922 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-04-11 03:06:46.677929 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-11 03:06:46.677936 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-11 03:06:46.677967 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-11 03:06:46.677975 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-11 03:06:46.677983 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-04-11 03:06:46.677990 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-04-11 03:06:46.677997 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-04-11 03:06:46.678004 | orchestrator |
2026-04-11 03:06:46.678011 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-04-11 03:06:46.678068 | orchestrator | Saturday 11 April 2026 03:06:38 +0000 (0:00:03.383) 0:00:28.568 ********
2026-04-11 03:06:46.678077 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-04-11 03:06:46.678085 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-11 03:06:46.678092 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-11 03:06:46.678100 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-04-11 03:06:46.678107 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-11 03:06:46.678114 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-04-11 03:06:46.678122 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-04-11 03:06:46.678128 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-04-11 03:06:46.678134 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-04-11 03:06:46.678141 | orchestrator |
2026-04-11 03:06:46.678153 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 03:06:46.678160 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 03:06:46.678167 | orchestrator |
2026-04-11 03:06:46.678173 | orchestrator |
2026-04-11 03:06:46.678180 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 03:06:46.678186 | orchestrator | Saturday 11 April 2026 03:06:46 +0000 (0:00:07.445) 0:00:36.014 ********
2026-04-11 03:06:46.678192 | orchestrator | ===============================================================================
2026-04-11 03:06:46.678198 | orchestrator | Write ceph keys to the share directory --------------------------------- 15.00s
2026-04-11 03:06:46.678205 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.45s
2026-04-11 03:06:46.678211 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.56s
2026-04-11 03:06:46.678217 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.31s
2026-04-11 03:06:46.678223 | orchestrator | Check if target directories exist --------------------------------------- 3.38s
2026-04-11 03:06:46.678230 | orchestrator | Create share directory -------------------------------------------------- 1.12s
2026-04-11 03:06:59.391593 | orchestrator | 2026-04-11 03:06:59 | INFO  | Task 1331238f-d50c-4b95-88f3-aeea6334d7d9 (cephclient) was prepared for execution.
2026-04-11 03:06:59.391713 | orchestrator | 2026-04-11 03:06:59 | INFO  | It takes a moment until task 1331238f-d50c-4b95-88f3-aeea6334d7d9 (cephclient) has been started and output is visible here. 2026-04-11 03:08:01.849508 | orchestrator | 2026-04-11 03:08:01.849596 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-11 03:08:01.849604 | orchestrator | 2026-04-11 03:08:01.849610 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-11 03:08:01.849615 | orchestrator | Saturday 11 April 2026 03:07:04 +0000 (0:00:00.285) 0:00:00.285 ******** 2026-04-11 03:08:01.849621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-11 03:08:01.849667 | orchestrator | 2026-04-11 03:08:01.849673 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-11 03:08:01.849678 | orchestrator | Saturday 11 April 2026 03:07:04 +0000 (0:00:00.265) 0:00:00.551 ******** 2026-04-11 03:08:01.849683 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-11 03:08:01.849688 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-11 03:08:01.849694 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-11 03:08:01.849699 | orchestrator | 2026-04-11 03:08:01.849703 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-11 03:08:01.849708 | orchestrator | Saturday 11 April 2026 03:07:05 +0000 (0:00:01.340) 0:00:01.892 ******** 2026-04-11 03:08:01.849713 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-11 03:08:01.849718 | orchestrator | 2026-04-11 03:08:01.849722 | orchestrator | TASK [osism.services.cephclient : Copy keyring 
file] *************************** 2026-04-11 03:08:01.849727 | orchestrator | Saturday 11 April 2026 03:07:07 +0000 (0:00:01.602) 0:00:03.495 ******** 2026-04-11 03:08:01.849732 | orchestrator | changed: [testbed-manager] 2026-04-11 03:08:01.849737 | orchestrator | 2026-04-11 03:08:01.849741 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-11 03:08:01.849746 | orchestrator | Saturday 11 April 2026 03:07:08 +0000 (0:00:00.953) 0:00:04.449 ******** 2026-04-11 03:08:01.849750 | orchestrator | changed: [testbed-manager] 2026-04-11 03:08:01.849755 | orchestrator | 2026-04-11 03:08:01.849759 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-11 03:08:01.849764 | orchestrator | Saturday 11 April 2026 03:07:09 +0000 (0:00:01.011) 0:00:05.460 ******** 2026-04-11 03:08:01.849769 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-04-11 03:08:01.849773 | orchestrator | ok: [testbed-manager] 2026-04-11 03:08:01.849778 | orchestrator | 2026-04-11 03:08:01.849782 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-11 03:08:01.849787 | orchestrator | Saturday 11 April 2026 03:07:51 +0000 (0:00:41.582) 0:00:47.042 ******** 2026-04-11 03:08:01.849792 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-11 03:08:01.849797 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-11 03:08:01.849801 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-11 03:08:01.849806 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-11 03:08:01.849810 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-11 03:08:01.849815 | orchestrator | 2026-04-11 03:08:01.849820 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-11 03:08:01.849824 | 
orchestrator | Saturday 11 April 2026 03:07:55 +0000 (0:00:04.317) 0:00:51.360 ******** 2026-04-11 03:08:01.849829 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-11 03:08:01.849833 | orchestrator | 2026-04-11 03:08:01.849838 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-11 03:08:01.849842 | orchestrator | Saturday 11 April 2026 03:07:55 +0000 (0:00:00.497) 0:00:51.858 ******** 2026-04-11 03:08:01.849846 | orchestrator | skipping: [testbed-manager] 2026-04-11 03:08:01.849851 | orchestrator | 2026-04-11 03:08:01.849856 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-11 03:08:01.849860 | orchestrator | Saturday 11 April 2026 03:07:56 +0000 (0:00:00.170) 0:00:52.028 ******** 2026-04-11 03:08:01.849864 | orchestrator | skipping: [testbed-manager] 2026-04-11 03:08:01.849869 | orchestrator | 2026-04-11 03:08:01.849885 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-04-11 03:08:01.849890 | orchestrator | Saturday 11 April 2026 03:07:56 +0000 (0:00:00.572) 0:00:52.601 ******** 2026-04-11 03:08:01.849895 | orchestrator | changed: [testbed-manager] 2026-04-11 03:08:01.849907 | orchestrator | 2026-04-11 03:08:01.849912 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-11 03:08:01.849916 | orchestrator | Saturday 11 April 2026 03:07:58 +0000 (0:00:01.751) 0:00:54.352 ******** 2026-04-11 03:08:01.849921 | orchestrator | changed: [testbed-manager] 2026-04-11 03:08:01.849925 | orchestrator | 2026-04-11 03:08:01.849930 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-11 03:08:01.849934 | orchestrator | Saturday 11 April 2026 03:07:59 +0000 (0:00:00.784) 0:00:55.136 ******** 2026-04-11 03:08:01.849939 | orchestrator | changed: [testbed-manager] 2026-04-11 03:08:01.849944 | 
orchestrator | 2026-04-11 03:08:01.849948 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-11 03:08:01.849953 | orchestrator | Saturday 11 April 2026 03:07:59 +0000 (0:00:00.639) 0:00:55.776 ******** 2026-04-11 03:08:01.849957 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-11 03:08:01.849962 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-11 03:08:01.849966 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-11 03:08:01.849971 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-11 03:08:01.849976 | orchestrator | 2026-04-11 03:08:01.849980 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 03:08:01.849985 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 03:08:01.849990 | orchestrator | 2026-04-11 03:08:01.849995 | orchestrator | 2026-04-11 03:08:01.850011 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 03:08:01.850049 | orchestrator | Saturday 11 April 2026 03:08:01 +0000 (0:00:01.580) 0:00:57.356 ******** 2026-04-11 03:08:01.850054 | orchestrator | =============================================================================== 2026-04-11 03:08:01.850059 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.58s 2026-04-11 03:08:01.850065 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.32s 2026-04-11 03:08:01.850071 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.75s 2026-04-11 03:08:01.850076 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.60s 2026-04-11 03:08:01.850082 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.58s 2026-04-11 03:08:01.850087 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.34s 2026-04-11 03:08:01.850092 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.01s 2026-04-11 03:08:01.850098 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s 2026-04-11 03:08:01.850103 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s 2026-04-11 03:08:01.850108 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s 2026-04-11 03:08:01.850114 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.57s 2026-04-11 03:08:01.850119 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s 2026-04-11 03:08:01.850125 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.27s 2026-04-11 03:08:01.850130 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.17s 2026-04-11 03:08:04.535441 | orchestrator | 2026-04-11 03:08:04 | INFO  | Task 562d9969-b24e-442c-a3af-77399da60f6c (ceph-bootstrap-dashboard) was prepared for execution. 2026-04-11 03:08:04.535536 | orchestrator | 2026-04-11 03:08:04 | INFO  | It takes a moment until task 562d9969-b24e-442c-a3af-77399da60f6c (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-04-11 03:09:25.686668 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-11 03:09:25.686805 | orchestrator | 2.16.14 2026-04-11 03:09:25.686815 | orchestrator | 2026-04-11 03:09:25.686822 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-11 03:09:25.686845 | orchestrator | 2026-04-11 03:09:25.686850 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-11 03:09:25.686856 | orchestrator | Saturday 11 April 2026 03:08:09 +0000 (0:00:00.477) 0:00:00.478 ******** 2026-04-11 03:09:25.686861 | orchestrator | changed: [testbed-manager] 2026-04-11 03:09:25.686868 | orchestrator | 2026-04-11 03:09:25.686873 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-11 03:09:25.686878 | orchestrator | Saturday 11 April 2026 03:08:11 +0000 (0:00:02.281) 0:00:02.759 ******** 2026-04-11 03:09:25.686883 | orchestrator | changed: [testbed-manager] 2026-04-11 03:09:25.686888 | orchestrator | 2026-04-11 03:09:25.686893 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-11 03:09:25.686899 | orchestrator | Saturday 11 April 2026 03:08:12 +0000 (0:00:01.092) 0:00:03.852 ******** 2026-04-11 03:09:25.686904 | orchestrator | changed: [testbed-manager] 2026-04-11 03:09:25.686909 | orchestrator | 2026-04-11 03:09:25.686914 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-11 03:09:25.686919 | orchestrator | Saturday 11 April 2026 03:08:13 +0000 (0:00:01.079) 0:00:04.932 ******** 2026-04-11 03:09:25.686924 | orchestrator | changed: [testbed-manager] 2026-04-11 03:09:25.686929 | orchestrator | 2026-04-11 03:09:25.686934 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-11 03:09:25.686939 | orchestrator | Saturday 11 April 
2026 03:08:15 +0000 (0:00:01.276) 0:00:06.209 ******** 2026-04-11 03:09:25.686944 | orchestrator | changed: [testbed-manager] 2026-04-11 03:09:25.686949 | orchestrator | 2026-04-11 03:09:25.686965 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-11 03:09:25.686971 | orchestrator | Saturday 11 April 2026 03:08:16 +0000 (0:00:01.141) 0:00:07.350 ******** 2026-04-11 03:09:25.686979 | orchestrator | changed: [testbed-manager] 2026-04-11 03:09:25.686990 | orchestrator | 2026-04-11 03:09:25.687002 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-11 03:09:25.687010 | orchestrator | Saturday 11 April 2026 03:08:17 +0000 (0:00:01.123) 0:00:08.474 ******** 2026-04-11 03:09:25.687018 | orchestrator | changed: [testbed-manager] 2026-04-11 03:09:25.687026 | orchestrator | 2026-04-11 03:09:25.687034 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-11 03:09:25.687043 | orchestrator | Saturday 11 April 2026 03:08:19 +0000 (0:00:02.078) 0:00:10.552 ******** 2026-04-11 03:09:25.687049 | orchestrator | changed: [testbed-manager] 2026-04-11 03:09:25.687056 | orchestrator | 2026-04-11 03:09:25.687064 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-11 03:09:25.687071 | orchestrator | Saturday 11 April 2026 03:08:20 +0000 (0:00:01.271) 0:00:11.824 ******** 2026-04-11 03:09:25.687080 | orchestrator | changed: [testbed-manager] 2026-04-11 03:09:25.687145 | orchestrator | 2026-04-11 03:09:25.687153 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-11 03:09:25.687158 | orchestrator | Saturday 11 April 2026 03:09:00 +0000 (0:00:39.614) 0:00:51.438 ******** 2026-04-11 03:09:25.687163 | orchestrator | skipping: [testbed-manager] 2026-04-11 03:09:25.687169 | orchestrator | 2026-04-11 03:09:25.687174 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-04-11 03:09:25.687179 | orchestrator | 2026-04-11 03:09:25.687184 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-11 03:09:25.687189 | orchestrator | Saturday 11 April 2026 03:09:00 +0000 (0:00:00.162) 0:00:51.601 ******** 2026-04-11 03:09:25.687194 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:09:25.687199 | orchestrator | 2026-04-11 03:09:25.687204 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-11 03:09:25.687209 | orchestrator | 2026-04-11 03:09:25.687214 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-11 03:09:25.687219 | orchestrator | Saturday 11 April 2026 03:09:02 +0000 (0:00:01.845) 0:00:53.446 ******** 2026-04-11 03:09:25.687233 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:09:25.687239 | orchestrator | 2026-04-11 03:09:25.687245 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-11 03:09:25.687251 | orchestrator | 2026-04-11 03:09:25.687257 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-11 03:09:25.687263 | orchestrator | Saturday 11 April 2026 03:09:13 +0000 (0:00:11.318) 0:01:04.765 ******** 2026-04-11 03:09:25.687269 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:09:25.687275 | orchestrator | 2026-04-11 03:09:25.687281 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 03:09:25.687288 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 03:09:25.687296 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 03:09:25.687302 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 03:09:25.687308 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 03:09:25.687314 | orchestrator | 2026-04-11 03:09:25.687320 | orchestrator | 2026-04-11 03:09:25.687326 | orchestrator | 2026-04-11 03:09:25.687333 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 03:09:25.687339 | orchestrator | Saturday 11 April 2026 03:09:25 +0000 (0:00:11.375) 0:01:16.140 ******** 2026-04-11 03:09:25.687344 | orchestrator | =============================================================================== 2026-04-11 03:09:25.687349 | orchestrator | Create admin user ------------------------------------------------------ 39.61s 2026-04-11 03:09:25.687369 | orchestrator | Restart ceph manager service ------------------------------------------- 24.54s 2026-04-11 03:09:25.687375 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.28s 2026-04-11 03:09:25.687380 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.08s 2026-04-11 03:09:25.687385 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.28s 2026-04-11 03:09:25.687390 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.27s 2026-04-11 03:09:25.687396 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.14s 2026-04-11 03:09:25.687404 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.12s 2026-04-11 03:09:25.687416 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.09s 2026-04-11 03:09:25.687427 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.08s 2026-04-11 03:09:25.687435 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.16s 2026-04-11 03:09:26.080341 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-04-11 03:09:28.421297 | orchestrator | 2026-04-11 03:09:28 | INFO  | Task 8cd89b8c-b083-4241-8861-0461099b7ec5 (keystone) was prepared for execution. 2026-04-11 03:09:28.421746 | orchestrator | 2026-04-11 03:09:28 | INFO  | It takes a moment until task 8cd89b8c-b083-4241-8861-0461099b7ec5 (keystone) has been started and output is visible here. 2026-04-11 03:09:36.232628 | orchestrator | 2026-04-11 03:09:36.232760 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 03:09:36.232773 | orchestrator | 2026-04-11 03:09:36.232794 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 03:09:36.232801 | orchestrator | Saturday 11 April 2026 03:09:32 +0000 (0:00:00.276) 0:00:00.276 ******** 2026-04-11 03:09:36.232808 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:09:36.232816 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:09:36.232822 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:09:36.232829 | orchestrator | 2026-04-11 03:09:36.232852 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 03:09:36.232862 | orchestrator | Saturday 11 April 2026 03:09:33 +0000 (0:00:00.381) 0:00:00.658 ******** 2026-04-11 03:09:36.232873 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-11 03:09:36.232889 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-11 03:09:36.232901 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-11 03:09:36.232911 | orchestrator | 2026-04-11 03:09:36.232922 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-11 03:09:36.232933 | orchestrator | 2026-04-11 03:09:36.232943 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-04-11 03:09:36.232954 | orchestrator | Saturday 11 April 2026 03:09:33 +0000 (0:00:00.505) 0:00:01.164 ******** 2026-04-11 03:09:36.232966 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:09:36.232978 | orchestrator | 2026-04-11 03:09:36.232988 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-11 03:09:36.232999 | orchestrator | Saturday 11 April 2026 03:09:34 +0000 (0:00:00.614) 0:00:01.778 ******** 2026-04-11 03:09:36.233016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-11 03:09:36.233033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-11 03:09:36.233075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-11 03:09:36.233103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 03:09:36.233116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 03:09:36.233128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 03:09:36.233140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 03:09:36.233151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 03:09:36.233164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 03:09:36.233190 | orchestrator | 2026-04-11 03:09:36.233200 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-04-11 03:09:36.233218 | orchestrator | Saturday 11 April 2026 03:09:36 +0000 (0:00:01.741) 0:00:03.520 ******** 2026-04-11 03:09:42.367269 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:09:42.367377 | orchestrator | 2026-04-11 03:09:42.367411 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-11 03:09:42.367425 | orchestrator | Saturday 11 April 2026 03:09:36 +0000 (0:00:00.325) 0:00:03.846 ******** 2026-04-11 03:09:42.367436 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:09:42.367448 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:09:42.367459 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:09:42.367470 | orchestrator | 2026-04-11 03:09:42.367482 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-11 03:09:42.367493 | orchestrator | Saturday 11 April 2026 03:09:36 +0000 (0:00:00.323) 0:00:04.169 ******** 2026-04-11 03:09:42.367505 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 03:09:42.367516 | orchestrator | 2026-04-11 03:09:42.367527 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-11 03:09:42.367538 | orchestrator | Saturday 11 April 2026 03:09:37 +0000 (0:00:00.879) 0:00:05.049 ******** 2026-04-11 03:09:42.367550 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:09:42.367561 | orchestrator | 2026-04-11 03:09:42.367572 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-11 03:09:42.367583 | orchestrator | Saturday 11 April 2026 03:09:38 +0000 (0:00:00.658) 0:00:05.708 ******** 2026-04-11 03:09:42.367601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-11 03:09:42.367619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-11 03:09:42.367633 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-11 03:09:42.367694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 03:09:42.367740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 03:09:42.367759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 03:09:42.367771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 03:09:42.367784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 03:09:42.367808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 03:09:42.367823 | orchestrator | 2026-04-11 03:09:42.367836 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-11 03:09:42.367849 | orchestrator | Saturday 11 April 2026 03:09:41 +0000 (0:00:03.335) 0:00:09.043 ******** 2026-04-11 03:09:42.367875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-11 03:09:43.317415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 03:09:43.317521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 03:09:43.317537 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:09:43.317554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-11 03:09:43.317591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 03:09:43.317609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 03:09:43.317620 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:09:43.317652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-11 03:09:43.317665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-04-11 03:09:43.317677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 03:09:43.317698 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:09:43.317831 | orchestrator | 2026-04-11 03:09:43.317856 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-11 03:09:43.317876 | orchestrator | Saturday 11 April 2026 03:09:42 +0000 (0:00:00.613) 0:00:09.656 ******** 2026-04-11 03:09:43.317896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-11 03:09:43.317928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 03:09:43.317965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 03:09:46.550230 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:09:46.550302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-11 03:09:46.550310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 03:09:46.550332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 03:09:46.550337 | 
orchestrator | skipping: [testbed-node-1] 2026-04-11 03:09:46.550350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-11 03:09:46.550354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 03:09:46.550368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 03:09:46.550372 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:09:46.550376 | orchestrator | 2026-04-11 03:09:46.550381 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-11 03:09:46.550386 | orchestrator | Saturday 11 April 2026 03:09:43 +0000 (0:00:00.955) 0:00:10.611 ******** 2026-04-11 03:09:46.550390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-11 03:09:46.550398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-11 03:09:46.550406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-11 03:09:46.550427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 03:09:51.536678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 03:09:51.536857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-04-11 03:09:51.536873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 03:09:51.536881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 03:09:51.536901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 
03:09:51.536909 | orchestrator | 2026-04-11 03:09:51.536917 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-11 03:09:51.536926 | orchestrator | Saturday 11 April 2026 03:09:46 +0000 (0:00:03.230) 0:00:13.842 ******** 2026-04-11 03:09:51.536952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-11 03:09:51.536967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-04-11 03:09:51.536975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-11 03:09:51.536982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 03:09:51.536994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-11 03:09:51.537006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 03:09:55.267691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 03:09:55.267834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 03:09:55.267843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 03:09:55.267849 | orchestrator |
2026-04-11 03:09:55.267855 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-04-11 03:09:55.267861 | orchestrator | Saturday 11 April 2026 03:09:51 +0000 (0:00:04.984) 0:00:18.827 ********
2026-04-11 03:09:55.267866 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:09:55.267871 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:09:55.267875 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:09:55.267879 | orchestrator |
2026-04-11 03:09:55.267884 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-04-11 03:09:55.267888 | orchestrator | Saturday 11 April 2026 03:09:52 +0000 (0:00:01.353) 0:00:20.180 ********
2026-04-11 03:09:55.267892 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:09:55.267896 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:09:55.267900 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:09:55.267904 | orchestrator |
2026-04-11 03:09:55.267908 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-04-11 03:09:55.267912 | orchestrator | Saturday 11 April 2026 03:09:53 +0000 (0:00:00.821) 0:00:21.002 ********
2026-04-11 03:09:55.267916 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:09:55.267931 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:09:55.267936 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:09:55.267940 | orchestrator |
2026-04-11 03:09:55.267944 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-04-11 03:09:55.267948 | orchestrator | Saturday 11 April 2026 03:09:54 +0000 (0:00:00.564) 0:00:21.566 ********
2026-04-11 03:09:55.267952 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:09:55.267956 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:09:55.267960 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:09:55.267964 | orchestrator |
2026-04-11 03:09:55.267968 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-04-11 03:09:55.267972 | orchestrator | Saturday 11 April 2026 03:09:54 +0000 (0:00:00.356) 0:00:21.922 ********
2026-04-11 03:09:55.268008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-11 03:09:55.268014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-11 03:09:55.268020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 03:09:55.268024 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:09:55.268028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-11 03:09:55.268036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-11 03:09:55.268046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 03:09:55.268055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-11 03:10:14.533101 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:10:14.533199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-11 03:10:14.533214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 03:10:14.533223 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:10:14.533231 | orchestrator |
2026-04-11 03:10:14.533239 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-11 03:10:14.533248 | orchestrator | Saturday 11 April 2026 03:09:55 +0000 (0:00:00.636) 0:00:22.559 ********
2026-04-11 03:10:14.533255 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:10:14.533263 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:10:14.533270 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:10:14.533277 | orchestrator |
2026-04-11 03:10:14.533285 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-04-11 03:10:14.533292 | orchestrator | Saturday 11 April 2026 03:09:55 +0000 (0:00:00.326) 0:00:22.886 ********
2026-04-11 03:10:14.533300 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-11 03:10:14.533327 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-11 03:10:14.533345 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-11 03:10:14.533353 | orchestrator |
2026-04-11 03:10:14.533361 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-04-11 03:10:14.533368 | orchestrator | Saturday 11 April 2026 03:09:57 +0000 (0:00:01.846) 0:00:24.732 ********
2026-04-11 03:10:14.533376 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 03:10:14.533383 | orchestrator |
2026-04-11 03:10:14.533390 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-04-11 03:10:14.533398 | orchestrator | Saturday 11 April 2026 03:09:58 +0000 (0:00:01.030) 0:00:25.763 ********
2026-04-11 03:10:14.533405 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:10:14.533412 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:10:14.533420 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:10:14.533427 | orchestrator |
2026-04-11 03:10:14.533434 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-04-11 03:10:14.533441 | orchestrator | Saturday 11 April 2026 03:09:59 +0000 (0:00:00.627) 0:00:26.390 ********
2026-04-11 03:10:14.533449 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-11 03:10:14.533456 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 03:10:14.533463 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-11 03:10:14.533470 | orchestrator |
2026-04-11 03:10:14.533478 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-04-11 03:10:14.533486 | orchestrator | Saturday 11 April 2026 03:10:00 +0000 (0:00:01.150) 0:00:27.541 ********
2026-04-11 03:10:14.533493 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:10:14.533501 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:10:14.533508 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:10:14.533516 | orchestrator |
2026-04-11 03:10:14.533523 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-04-11 03:10:14.533530 | orchestrator | Saturday 11 April 2026 03:10:00 +0000 (0:00:00.590) 0:00:28.131 ********
2026-04-11 03:10:14.533538 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-11 03:10:14.533545 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-11 03:10:14.533552 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-11 03:10:14.533560 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-11 03:10:14.533567 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-11 03:10:14.533574 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-11 03:10:14.533581 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-11 03:10:14.533589 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-11 03:10:14.533609 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-11 03:10:14.533617 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-11 03:10:14.533624 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-11 03:10:14.533631 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-11 03:10:14.533638 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-11 03:10:14.533646 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-11 03:10:14.533661 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-11 03:10:14.533670 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 03:10:14.533678 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 03:10:14.533686 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 03:10:14.533695 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 03:10:14.533704 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 03:10:14.533712 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 03:10:14.533720 | orchestrator |
2026-04-11 03:10:14.533729 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-04-11 03:10:14.533781 | orchestrator | Saturday 11 April 2026 03:10:09 +0000 (0:00:08.961) 0:00:37.093 ********
2026-04-11 03:10:14.533790 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 03:10:14.533798 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 03:10:14.533806 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 03:10:14.533814 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 03:10:14.533822 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 03:10:14.533831 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 03:10:14.533844 | orchestrator |
2026-04-11 03:10:14.533856 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-04-11 03:10:14.533868 | orchestrator | Saturday 11 April 2026 03:10:12 +0000 (0:00:02.498) 0:00:39.592 ********
2026-04-11 03:10:14.533883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-11 03:10:14.533909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-11 03:11:56.597036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-11 03:11:56.597152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-11 03:11:56.597185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-11 03:11:56.597196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-11 03:11:56.597208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 03:11:56.597238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 03:11:56.597272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-11 03:11:56.597284 | orchestrator |
2026-04-11 03:11:56.597296 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-11 03:11:56.597307 | orchestrator | Saturday 11 April 2026 03:10:14 +0000 (0:00:02.232) 0:00:41.824 ******** 2026-04-11 03:11:56.597316 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:11:56.597327 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:11:56.597338 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:11:56.597347 | orchestrator | 2026-04-11 03:11:56.597357 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-11 03:11:56.597367 | orchestrator | Saturday 11 April 2026 03:10:15 +0000 (0:00:00.613) 0:00:42.437 ******** 2026-04-11 03:11:56.597377 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:11:56.597387 | orchestrator | 2026-04-11 03:11:56.597397 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-11 03:11:56.597407 | orchestrator | Saturday 11 April 2026 03:10:17 +0000 (0:00:02.272) 0:00:44.710 ******** 2026-04-11 03:11:56.597418 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:11:56.597428 | orchestrator | 2026-04-11 03:11:56.597438 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-04-11 03:11:56.597448 | orchestrator | Saturday 11 April 2026 03:10:19 +0000 (0:00:02.226) 0:00:46.937 ******** 2026-04-11 03:11:56.597458 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:11:56.597469 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:11:56.597479 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:11:56.597489 | orchestrator | 2026-04-11 03:11:56.597499 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-04-11 03:11:56.597509 | orchestrator | Saturday 11 April 2026 03:10:20 +0000 (0:00:00.865) 0:00:47.803 ******** 2026-04-11 03:11:56.597519 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:11:56.597529 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:11:56.597546 | orchestrator | ok: 
[testbed-node-2] 2026-04-11 03:11:56.597557 | orchestrator | 2026-04-11 03:11:56.597567 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-04-11 03:11:56.597578 | orchestrator | Saturday 11 April 2026 03:10:20 +0000 (0:00:00.344) 0:00:48.147 ******** 2026-04-11 03:11:56.597589 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:11:56.597600 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:11:56.597611 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:11:56.597622 | orchestrator | 2026-04-11 03:11:56.597633 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-11 03:11:56.597644 | orchestrator | Saturday 11 April 2026 03:10:21 +0000 (0:00:00.591) 0:00:48.739 ******** 2026-04-11 03:11:56.597654 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:11:56.597665 | orchestrator | 2026-04-11 03:11:56.597676 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-04-11 03:11:56.597687 | orchestrator | Saturday 11 April 2026 03:10:36 +0000 (0:00:14.667) 0:01:03.406 ******** 2026-04-11 03:11:56.597698 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:11:56.597710 | orchestrator | 2026-04-11 03:11:56.597721 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-11 03:11:56.597741 | orchestrator | Saturday 11 April 2026 03:10:47 +0000 (0:00:11.052) 0:01:14.459 ******** 2026-04-11 03:11:56.597752 | orchestrator | 2026-04-11 03:11:56.597762 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-11 03:11:56.597772 | orchestrator | Saturday 11 April 2026 03:10:47 +0000 (0:00:00.082) 0:01:14.541 ******** 2026-04-11 03:11:56.597782 | orchestrator | 2026-04-11 03:11:56.597794 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-11 
03:11:56.597832 | orchestrator | Saturday 11 April 2026 03:10:47 +0000 (0:00:00.075) 0:01:14.616 ******** 2026-04-11 03:11:56.597843 | orchestrator | 2026-04-11 03:11:56.597852 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-11 03:11:56.597862 | orchestrator | Saturday 11 April 2026 03:10:47 +0000 (0:00:00.082) 0:01:14.699 ******** 2026-04-11 03:11:56.597872 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:11:56.597882 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:11:56.597892 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:11:56.597902 | orchestrator | 2026-04-11 03:11:56.597912 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-11 03:11:56.597923 | orchestrator | Saturday 11 April 2026 03:11:37 +0000 (0:00:49.949) 0:02:04.649 ******** 2026-04-11 03:11:56.597933 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:11:56.597943 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:11:56.597953 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:11:56.597962 | orchestrator | 2026-04-11 03:11:56.597973 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-11 03:11:56.597983 | orchestrator | Saturday 11 April 2026 03:11:48 +0000 (0:00:10.981) 0:02:15.630 ******** 2026-04-11 03:11:56.597994 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:11:56.598005 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:11:56.598090 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:11:56.598133 | orchestrator | 2026-04-11 03:11:56.598145 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-11 03:11:56.598157 | orchestrator | Saturday 11 April 2026 03:11:55 +0000 (0:00:07.625) 0:02:23.255 ******** 2026-04-11 03:11:56.598181 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:12:47.729616 | orchestrator | 2026-04-11 03:12:47.729727 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-04-11 03:12:47.729741 | orchestrator | Saturday 11 April 2026 03:11:56 +0000 (0:00:00.634) 0:02:23.889 ******** 2026-04-11 03:12:47.729751 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:12:47.729760 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:12:47.729768 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:12:47.729776 | orchestrator | 2026-04-11 03:12:47.729784 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-11 03:12:47.729793 | orchestrator | Saturday 11 April 2026 03:11:57 +0000 (0:00:01.208) 0:02:25.098 ******** 2026-04-11 03:12:47.729801 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:12:47.729810 | orchestrator | 2026-04-11 03:12:47.729818 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-04-11 03:12:47.729826 | orchestrator | Saturday 11 April 2026 03:11:59 +0000 (0:00:01.742) 0:02:26.840 ******** 2026-04-11 03:12:47.729876 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-11 03:12:47.729888 | orchestrator | 2026-04-11 03:12:47.729896 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-04-11 03:12:47.729904 | orchestrator | Saturday 11 April 2026 03:12:11 +0000 (0:00:11.759) 0:02:38.599 ******** 2026-04-11 03:12:47.729912 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-11 03:12:47.729920 | orchestrator | 2026-04-11 03:12:47.729928 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-04-11 03:12:47.729936 | orchestrator | Saturday 11 April 2026 03:12:35 +0000 (0:00:24.229) 0:03:02.828 ******** 2026-04-11 03:12:47.729964 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-11 03:12:47.729974 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-04-11 03:12:47.729982 | orchestrator | 2026-04-11 03:12:47.729990 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-11 03:12:47.729998 | orchestrator | Saturday 11 April 2026 03:12:42 +0000 (0:00:06.801) 0:03:09.630 ******** 2026-04-11 03:12:47.730006 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:12:47.730014 | orchestrator | 2026-04-11 03:12:47.730066 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-11 03:12:47.730074 | orchestrator | Saturday 11 April 2026 03:12:42 +0000 (0:00:00.148) 0:03:09.779 ******** 2026-04-11 03:12:47.730082 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:12:47.730090 | orchestrator | 2026-04-11 03:12:47.730098 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-04-11 03:12:47.730119 | orchestrator | Saturday 11 April 2026 03:12:42 +0000 (0:00:00.144) 0:03:09.924 ******** 2026-04-11 03:12:47.730127 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:12:47.730135 | orchestrator | 2026-04-11 03:12:47.730143 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-04-11 03:12:47.730151 | orchestrator | Saturday 11 April 2026 03:12:42 +0000 (0:00:00.138) 0:03:10.062 ******** 2026-04-11 03:12:47.730161 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:12:47.730170 | orchestrator | 2026-04-11 03:12:47.730179 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-11 03:12:47.730188 | orchestrator | Saturday 11 April 2026 03:12:43 +0000 (0:00:00.604) 0:03:10.667 ******** 2026-04-11 03:12:47.730197 | orchestrator | ok: [testbed-node-0] 2026-04-11 
03:12:47.730206 | orchestrator |
2026-04-11 03:12:47.730216 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-11 03:12:47.730225 | orchestrator | Saturday 11 April 2026 03:12:46 +0000 (0:00:03.402) 0:03:14.069 ********
2026-04-11 03:12:47.730233 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:12:47.730242 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:12:47.730251 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:12:47.730260 | orchestrator |
2026-04-11 03:12:47.730269 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 03:12:47.730279 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-11 03:12:47.730289 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-11 03:12:47.730299 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-11 03:12:47.730307 | orchestrator |
2026-04-11 03:12:47.730317 | orchestrator |
2026-04-11 03:12:47.730327 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 03:12:47.730335 | orchestrator | Saturday 11 April 2026 03:12:47 +0000 (0:00:00.494) 0:03:14.563 ********
2026-04-11 03:12:47.730344 | orchestrator | ===============================================================================
2026-04-11 03:12:47.730354 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 49.95s
2026-04-11 03:12:47.730364 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.23s
2026-04-11 03:12:47.730372 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.67s
2026-04-11 03:12:47.730381 | orchestrator | keystone : Creating admin project, user, role, service, and
endpoint --- 11.76s
2026-04-11 03:12:47.730390 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.05s
2026-04-11 03:12:47.730399 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.98s
2026-04-11 03:12:47.730408 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.96s
2026-04-11 03:12:47.730424 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.63s
2026-04-11 03:12:47.730433 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.80s
2026-04-11 03:12:47.730458 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.98s
2026-04-11 03:12:47.730467 | orchestrator | keystone : Creating default user role ----------------------------------- 3.40s
2026-04-11 03:12:47.730477 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.34s
2026-04-11 03:12:47.730485 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.23s
2026-04-11 03:12:47.730494 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.50s
2026-04-11 03:12:47.730503 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.27s
2026-04-11 03:12:47.730512 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.23s
2026-04-11 03:12:47.730519 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.23s
2026-04-11 03:12:47.730531 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.85s
2026-04-11 03:12:47.730544 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.74s
2026-04-11 03:12:47.730558 | orchestrator | keystone : Ensuring config directories exist ----------------------------
1.74s
2026-04-11 03:12:50.266452 | orchestrator | 2026-04-11 03:12:50 | INFO  | Task 086de91c-d92d-42f4-8e62-fd1348542b5b (placement) was prepared for execution.
2026-04-11 03:12:50.267168 | orchestrator | 2026-04-11 03:12:50 | INFO  | It takes a moment until task 086de91c-d92d-42f4-8e62-fd1348542b5b (placement) has been started and output is visible here.
2026-04-11 03:13:26.859102 | orchestrator |
2026-04-11 03:13:26.859214 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 03:13:26.859231 | orchestrator |
2026-04-11 03:13:26.859243 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 03:13:26.859254 | orchestrator | Saturday 11 April 2026 03:12:55 +0000 (0:00:00.321) 0:00:00.321 ********
2026-04-11 03:13:26.859264 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:13:26.859277 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:13:26.859288 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:13:26.859299 | orchestrator |
2026-04-11 03:13:26.859310 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 03:13:26.859320 | orchestrator | Saturday 11 April 2026 03:12:55 +0000 (0:00:00.333) 0:00:00.655 ********
2026-04-11 03:13:26.859332 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-11 03:13:26.859353 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-11 03:13:26.859360 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-11 03:13:26.859366 | orchestrator |
2026-04-11 03:13:26.859373 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-11 03:13:26.859379 | orchestrator |
2026-04-11 03:13:26.859385 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-11 03:13:26.859392 | orchestrator | Saturday 11 April 2026 03:12:55
+0000 (0:00:00.489) 0:00:01.144 ********
2026-04-11 03:13:26.859398 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:13:26.859406 | orchestrator |
2026-04-11 03:13:26.859412 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-04-11 03:13:26.859418 | orchestrator | Saturday 11 April 2026 03:12:56 +0000 (0:00:00.627) 0:00:01.772 ********
2026-04-11 03:13:26.859425 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-04-11 03:13:26.859431 | orchestrator |
2026-04-11 03:13:26.859437 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-04-11 03:13:26.859443 | orchestrator | Saturday 11 April 2026 03:13:00 +0000 (0:00:03.872) 0:00:05.644 ********
2026-04-11 03:13:26.859467 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-04-11 03:13:26.859474 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-04-11 03:13:26.859480 | orchestrator |
2026-04-11 03:13:26.859487 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-04-11 03:13:26.859493 | orchestrator | Saturday 11 April 2026 03:13:06 +0000 (0:00:06.568) 0:00:12.213 ********
2026-04-11 03:13:26.859503 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-04-11 03:13:26.859513 | orchestrator |
2026-04-11 03:13:26.859522 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-04-11 03:13:26.859531 | orchestrator | Saturday 11 April 2026 03:13:10 +0000 (0:00:03.753) 0:00:15.966 ********
2026-04-11 03:13:26.859539 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-11 03:13:26.859548 | orchestrator | changed: [testbed-node-0] => (item=placement ->
service)
2026-04-11 03:13:26.859556 | orchestrator |
2026-04-11 03:13:26.859565 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-04-11 03:13:26.859574 | orchestrator | Saturday 11 April 2026 03:13:14 +0000 (0:00:04.261) 0:00:20.228 ********
2026-04-11 03:13:26.859583 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-11 03:13:26.859593 | orchestrator |
2026-04-11 03:13:26.859602 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-04-11 03:13:26.859610 | orchestrator | Saturday 11 April 2026 03:13:18 +0000 (0:00:03.200) 0:00:23.429 ********
2026-04-11 03:13:26.859619 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-04-11 03:13:26.859628 | orchestrator |
2026-04-11 03:13:26.859657 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-11 03:13:26.859678 | orchestrator | Saturday 11 April 2026 03:13:22 +0000 (0:00:04.228) 0:00:27.658 ********
2026-04-11 03:13:26.859689 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:13:26.859696 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:13:26.859704 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:13:26.859711 | orchestrator |
2026-04-11 03:13:26.859718 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-04-11 03:13:26.859725 | orchestrator | Saturday 11 April 2026 03:13:22 +0000 (0:00:00.314) 0:00:27.972 ********
2026-04-11 03:13:26.859736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:26.859770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:26.859787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:26.859795 | orchestrator |
2026-04-11 03:13:26.859803 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-04-11 03:13:26.859810 | orchestrator | Saturday 11 April 2026 03:13:23 +0000 (0:00:01.124) 0:00:29.096 ********
2026-04-11 03:13:26.859817 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:13:26.859824 | orchestrator |
2026-04-11 03:13:26.859832 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-04-11 03:13:26.859838 | orchestrator | Saturday 11 April 2026 03:13:24 +0000 (0:00:00.375) 0:00:29.472 ********
2026-04-11 03:13:26.859845 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:13:26.859852 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:13:26.859859 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:13:26.859920 | orchestrator |
2026-04-11 03:13:26.859929 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-11 03:13:26.859937 | orchestrator | Saturday 11 April 2026 03:13:24 +0000 (0:00:00.342) 0:00:29.815 ********
2026-04-11 03:13:26.859944 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:13:26.859952 | orchestrator |
2026-04-11 03:13:26.859959 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-04-11 03:13:26.859966 | orchestrator | Saturday 11 April 2026 03:13:25 +0000 (0:00:00.589) 0:00:30.405 ********
2026-04-11
03:13:26.859973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:26.859990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:29.900821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value':
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:29.900985 | orchestrator |
2026-04-11 03:13:29.901004 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-04-11 03:13:29.901017 | orchestrator | Saturday 11 April 2026 03:13:26 +0000 (0:00:01.743) 0:00:32.149 ********
2026-04-11 03:13:29.901032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:29.901045 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:13:29.901058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:29.901070 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:13:29.901081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True,
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:29.901130 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:13:29.901152 | orchestrator |
2026-04-11 03:13:29.901172 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-04-11 03:13:29.901214 | orchestrator | Saturday 11 April 2026 03:13:27 +0000 (0:00:00.561) 0:00:32.710 ********
2026-04-11 03:13:29.901246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:29.901266 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:13:29.901288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:29.901308 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:13:29.901329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:29.901351 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:13:29.901371 | orchestrator |
2026-04-11 03:13:29.901392 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2026-04-11 03:13:29.901412 | orchestrator | Saturday 11 April 2026 03:13:28 +0000 (0:00:00.809) 0:00:33.520 ********
2026-04-11 03:13:29.901449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api',
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:29.901494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:37.526188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130',
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:37.526391 | orchestrator |
2026-04-11 03:13:37.526415 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2026-04-11 03:13:37.526425 | orchestrator | Saturday 11 April 2026 03:13:29 +0000 (0:00:01.671) 0:00:35.192 ********
2026-04-11 03:13:37.526434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:37.526444 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:37.526489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:37.526498 | orchestrator |
2026-04-11 03:13:37.526510 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration]
***************
2026-04-11 03:13:37.526523 | orchestrator | Saturday 11 April 2026 03:13:32 +0000 (0:00:02.775) 0:00:37.968 ********
2026-04-11 03:13:37.526557 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-11 03:13:37.526577 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-11 03:13:37.526590 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-11 03:13:37.526603 | orchestrator |
2026-04-11 03:13:37.526615 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-04-11 03:13:37.526626 | orchestrator | Saturday 11 April 2026 03:13:34 +0000 (0:00:01.451) 0:00:39.419 ********
2026-04-11 03:13:37.526638 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:13:37.526653 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:13:37.526665 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:13:37.526677 | orchestrator |
2026-04-11 03:13:37.526690 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-04-11 03:13:37.526704 | orchestrator | Saturday 11 April 2026 03:13:35 +0000 (0:00:01.387) 0:00:40.807 ********
2026-04-11 03:13:37.526719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api':
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:37.526744 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:13:37.526758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-11 03:13:37.526772 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:13:37.526786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'],
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-11 03:13:37.526800 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:13:37.526814 | orchestrator | 2026-04-11 03:13:37.526836 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-11 03:13:37.526851 | orchestrator | Saturday 11 April 2026 03:13:36 +0000 (0:00:00.801) 0:00:41.608 ******** 2026-04-11 03:13:37.526899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-11 03:14:04.948721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-11 03:14:04.948848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-11 03:14:04.948861 | orchestrator | 2026-04-11 03:14:04.948870 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-11 03:14:04.948880 | orchestrator | Saturday 11 April 2026 03:13:37 +0000 (0:00:01.213) 0:00:42.821 ******** 2026-04-11 03:14:04.948888 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:14:04.948916 | orchestrator | 2026-04-11 03:14:04.948923 | orchestrator 
| TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-11 03:14:04.948930 | orchestrator | Saturday 11 April 2026 03:13:39 +0000 (0:00:02.268) 0:00:45.090 ******** 2026-04-11 03:14:04.948939 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:14:04.948945 | orchestrator | 2026-04-11 03:14:04.948950 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-11 03:14:04.948954 | orchestrator | Saturday 11 April 2026 03:13:42 +0000 (0:00:02.255) 0:00:47.345 ******** 2026-04-11 03:14:04.948959 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:14:04.948965 | orchestrator | 2026-04-11 03:14:04.948973 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-11 03:14:04.948980 | orchestrator | Saturday 11 April 2026 03:13:56 +0000 (0:00:14.531) 0:01:01.877 ******** 2026-04-11 03:14:04.948986 | orchestrator | 2026-04-11 03:14:04.948993 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-11 03:14:04.949000 | orchestrator | Saturday 11 April 2026 03:13:56 +0000 (0:00:00.094) 0:01:01.972 ******** 2026-04-11 03:14:04.949007 | orchestrator | 2026-04-11 03:14:04.949014 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-11 03:14:04.949021 | orchestrator | Saturday 11 April 2026 03:13:56 +0000 (0:00:00.076) 0:01:02.048 ******** 2026-04-11 03:14:04.949028 | orchestrator | 2026-04-11 03:14:04.949036 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-11 03:14:04.949057 | orchestrator | Saturday 11 April 2026 03:13:56 +0000 (0:00:00.076) 0:01:02.125 ******** 2026-04-11 03:14:04.949065 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:14:04.949074 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:14:04.949079 | orchestrator | changed: [testbed-node-0] 2026-04-11 
03:14:04.949083 | orchestrator | 2026-04-11 03:14:04.949089 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 03:14:04.949098 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 03:14:04.949106 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-11 03:14:04.949114 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-11 03:14:04.949121 | orchestrator | 2026-04-11 03:14:04.949128 | orchestrator | 2026-04-11 03:14:04.949137 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 03:14:04.949154 | orchestrator | Saturday 11 April 2026 03:14:04 +0000 (0:00:07.716) 0:01:09.841 ******** 2026-04-11 03:14:04.949161 | orchestrator | =============================================================================== 2026-04-11 03:14:04.949166 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.53s 2026-04-11 03:14:04.949186 | orchestrator | placement : Restart placement-api container ----------------------------- 7.72s 2026-04-11 03:14:04.949191 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.57s 2026-04-11 03:14:04.949196 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.26s 2026-04-11 03:14:04.949200 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.23s 2026-04-11 03:14:04.949205 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.87s 2026-04-11 03:14:04.949210 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.75s 2026-04-11 03:14:04.949214 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 
3.20s 2026-04-11 03:14:04.949219 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.78s 2026-04-11 03:14:04.949223 | orchestrator | placement : Creating placement databases -------------------------------- 2.27s 2026-04-11 03:14:04.949228 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.26s 2026-04-11 03:14:04.949232 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.74s 2026-04-11 03:14:04.949237 | orchestrator | placement : Copying over config.json files for services ----------------- 1.67s 2026-04-11 03:14:04.949241 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.45s 2026-04-11 03:14:04.949246 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.39s 2026-04-11 03:14:04.949250 | orchestrator | placement : Check placement containers ---------------------------------- 1.21s 2026-04-11 03:14:04.949255 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.12s 2026-04-11 03:14:04.949259 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.81s 2026-04-11 03:14:04.949265 | orchestrator | placement : Copying over existing policy file --------------------------- 0.80s 2026-04-11 03:14:04.949270 | orchestrator | placement : include_tasks ----------------------------------------------- 0.63s 2026-04-11 03:14:07.583658 | orchestrator | 2026-04-11 03:14:07 | INFO  | Task f73c3e82-9718-4567-96b3-f1245d2f4c23 (neutron) was prepared for execution. 2026-04-11 03:14:07.583755 | orchestrator | 2026-04-11 03:14:07 | INFO  | It takes a moment until task f73c3e82-9718-4567-96b3-f1245d2f4c23 (neutron) has been started and output is visible here. 
2026-04-11 03:14:57.324019 | orchestrator |
2026-04-11 03:14:57.324124 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 03:14:57.324137 | orchestrator |
2026-04-11 03:14:57.324144 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 03:14:57.324150 | orchestrator | Saturday 11 April 2026 03:14:12 +0000 (0:00:00.285) 0:00:00.285 ********
2026-04-11 03:14:57.324157 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:14:57.324164 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:14:57.324170 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:14:57.324176 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:14:57.324182 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:14:57.324188 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:14:57.324193 | orchestrator |
2026-04-11 03:14:57.324199 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 03:14:57.324205 | orchestrator | Saturday 11 April 2026 03:14:13 +0000 (0:00:00.772) 0:00:01.058 ********
2026-04-11 03:14:57.324211 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-11 03:14:57.324218 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-11 03:14:57.324223 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-11 03:14:57.324248 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-11 03:14:57.324254 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-11 03:14:57.324260 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-11 03:14:57.324266 | orchestrator |
2026-04-11 03:14:57.324272 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-11 03:14:57.324278 | orchestrator |
2026-04-11 03:14:57.324283 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-11 03:14:57.324300 | orchestrator | Saturday 11 April 2026 03:14:13 +0000 (0:00:00.710) 0:00:01.768 ********
2026-04-11 03:14:57.324307 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:14:57.324314 | orchestrator |
2026-04-11 03:14:57.324320 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-11 03:14:57.324325 | orchestrator | Saturday 11 April 2026 03:14:15 +0000 (0:00:01.389) 0:00:03.158 ********
2026-04-11 03:14:57.324331 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:14:57.324337 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:14:57.324343 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:14:57.324349 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:14:57.324355 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:14:57.324361 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:14:57.324366 | orchestrator |
2026-04-11 03:14:57.324372 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-11 03:14:57.324378 | orchestrator | Saturday 11 April 2026 03:14:16 +0000 (0:00:01.367) 0:00:04.525 ********
2026-04-11 03:14:57.324384 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:14:57.324389 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:14:57.324395 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:14:57.324401 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:14:57.324406 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:14:57.324412 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:14:57.324418 | orchestrator |
2026-04-11 03:14:57.324424 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-11 03:14:57.324429 | orchestrator | Saturday 11 April 2026 03:14:17 +0000 (0:00:01.097) 0:00:05.623 ********
2026-04-11 03:14:57.324435 | orchestrator | ok: [testbed-node-0] => {
2026-04-11 03:14:57.324442 | orchestrator |  "changed": false,
2026-04-11 03:14:57.324448 | orchestrator |  "msg": "All assertions passed"
2026-04-11 03:14:57.324454 | orchestrator | }
2026-04-11 03:14:57.324460 | orchestrator | ok: [testbed-node-1] => {
2026-04-11 03:14:57.324466 | orchestrator |  "changed": false,
2026-04-11 03:14:57.324473 | orchestrator |  "msg": "All assertions passed"
2026-04-11 03:14:57.324482 | orchestrator | }
2026-04-11 03:14:57.324490 | orchestrator | ok: [testbed-node-2] => {
2026-04-11 03:14:57.324505 | orchestrator |  "changed": false,
2026-04-11 03:14:57.324519 | orchestrator |  "msg": "All assertions passed"
2026-04-11 03:14:57.324528 | orchestrator | }
2026-04-11 03:14:57.324537 | orchestrator | ok: [testbed-node-3] => {
2026-04-11 03:14:57.324545 | orchestrator |  "changed": false,
2026-04-11 03:14:57.324554 | orchestrator |  "msg": "All assertions passed"
2026-04-11 03:14:57.324563 | orchestrator | }
2026-04-11 03:14:57.324572 | orchestrator | ok: [testbed-node-4] => {
2026-04-11 03:14:57.324582 | orchestrator |  "changed": false,
2026-04-11 03:14:57.324591 | orchestrator |  "msg": "All assertions passed"
2026-04-11 03:14:57.324602 | orchestrator | }
2026-04-11 03:14:57.324611 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 03:14:57.324621 | orchestrator |  "changed": false,
2026-04-11 03:14:57.324630 | orchestrator |  "msg": "All assertions passed"
2026-04-11 03:14:57.324640 | orchestrator | }
2026-04-11 03:14:57.324648 | orchestrator |
2026-04-11 03:14:57.324657 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-11 03:14:57.324667 | orchestrator | Saturday 11 April 2026 03:14:18 +0000 (0:00:00.915) 0:00:06.538 ********
2026-04-11 03:14:57.324676 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:14:57.324697 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:14:57.324708 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:14:57.324718 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:14:57.324728 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:14:57.324738 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:14:57.324745 | orchestrator |
2026-04-11 03:14:57.324752 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-04-11 03:14:57.324758 | orchestrator | Saturday 11 April 2026 03:14:19 +0000 (0:00:00.695) 0:00:07.234 ********
2026-04-11 03:14:57.324764 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-04-11 03:14:57.324770 | orchestrator |
2026-04-11 03:14:57.324775 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-04-11 03:14:57.324781 | orchestrator | Saturday 11 April 2026 03:14:23 +0000 (0:00:04.004) 0:00:11.238 ********
2026-04-11 03:14:57.324787 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-04-11 03:14:57.324795 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-04-11 03:14:57.324801 | orchestrator |
2026-04-11 03:14:57.324821 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-04-11 03:14:57.324828 | orchestrator | Saturday 11 April 2026 03:14:29 +0000 (0:00:06.443) 0:00:17.681 ********
2026-04-11 03:14:57.324834 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-11 03:14:57.324839 | orchestrator |
2026-04-11 03:14:57.324845 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-04-11 03:14:57.324851 | orchestrator | Saturday 11 April 2026 03:14:33 +0000 (0:00:03.338) 0:00:21.020 ********
2026-04-11 03:14:57.324857 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-11 03:14:57.324863 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-04-11 03:14:57.324868 | orchestrator |
2026-04-11 03:14:57.324874 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-04-11 03:14:57.324880 | orchestrator | Saturday 11 April 2026 03:14:37 +0000 (0:00:04.039) 0:00:25.060 ********
2026-04-11 03:14:57.324886 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-11 03:14:57.324891 | orchestrator |
2026-04-11 03:14:57.324897 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-04-11 03:14:57.324903 | orchestrator | Saturday 11 April 2026 03:14:40 +0000 (0:00:03.053) 0:00:28.113 ********
2026-04-11 03:14:57.324909 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-04-11 03:14:57.324914 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-04-11 03:14:57.324920 | orchestrator |
2026-04-11 03:14:57.324952 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-11 03:14:57.324959 | orchestrator | Saturday 11 April 2026 03:14:47 +0000 (0:00:07.571) 0:00:35.685 ********
2026-04-11 03:14:57.324965 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:14:57.324976 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:14:57.324982 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:14:57.324988 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:14:57.324994 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:14:57.325000 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:14:57.325005 | orchestrator |
2026-04-11 03:14:57.325011 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-04-11 03:14:57.325017 | orchestrator | Saturday 11 April 2026 03:14:48 +0000 (0:00:00.873) 0:00:36.558 ********
2026-04-11 03:14:57.325032 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:14:57.325038 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:14:57.325044 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:14:57.325049 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:14:57.325055 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:14:57.325061 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:14:57.325072 | orchestrator |
2026-04-11 03:14:57.325078 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-04-11 03:14:57.325084 | orchestrator | Saturday 11 April 2026 03:14:50 +0000 (0:00:02.157) 0:00:38.716 ********
2026-04-11 03:14:57.325090 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:14:57.325096 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:14:57.325101 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:14:57.325107 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:14:57.325113 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:14:57.325119 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:14:57.325125 | orchestrator |
2026-04-11 03:14:57.325130 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-11 03:14:57.325136 | orchestrator | Saturday 11 April 2026 03:14:52 +0000 (0:00:01.284) 0:00:40.000 ********
2026-04-11 03:14:57.325142 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:14:57.325148 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:14:57.325153 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:14:57.325159 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:14:57.325165 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:14:57.325171 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:14:57.325176 | orchestrator |
2026-04-11 03:14:57.325182 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-04-11 03:14:57.325188 | orchestrator | Saturday 11 April 2026 03:14:54 +0000 (0:00:02.429) 0:00:42.430 ********
2026-04-11 03:14:57.325197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:14:57.325216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:15:03.310445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:15:03.310586 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 03:15:03.310635 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 03:15:03.310643 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 03:15:03.310651 | orchestrator |
2026-04-11 03:15:03.310659 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-04-11 03:15:03.310666 | orchestrator | Saturday 11 April 2026 03:14:57 +0000 (0:00:02.848) 0:00:45.279 ********
2026-04-11 03:15:03.310672 | orchestrator | [WARNING]: Skipped
2026-04-11 03:15:03.310680 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-04-11 03:15:03.310687 | orchestrator | due to this access issue:
2026-04-11 03:15:03.310694 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-04-11 03:15:03.310699 | orchestrator | a directory
2026-04-11 03:15:03.310706 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 03:15:03.310711 | orchestrator |
2026-04-11 03:15:03.310717 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-11 03:15:03.310724 | orchestrator | Saturday 11 April 2026 03:14:58 +0000 (0:00:00.891) 0:00:46.170 ********
2026-04-11 03:15:03.310735 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:15:03.310746 | orchestrator |
2026-04-11 03:15:03.310757 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-04-11 03:15:03.310784 | orchestrator | Saturday 11 April 2026 03:14:59 +0000 (0:00:01.459) 0:00:47.629 ********
2026-04-11 03:15:03.310796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 03:15:03.310823 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 03:15:03.310834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:15:03.310843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:15:03.310861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:15:08.649554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 03:15:08.649693 | orchestrator |
2026-04-11 03:15:08.649712 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-04-11 03:15:08.649726 | orchestrator | Saturday 11 April 2026 03:15:03 +0000 (0:00:03.636) 0:00:51.266 ********
2026-04-11 03:15:08.649739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:15:08.649753 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:15:08.649765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:15:08.649777 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:15:08.649788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:15:08.649824 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:15:08.649867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared',
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:08.649889 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:08.649918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:08.649963 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:08.649984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 
03:15:08.650002 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:08.650136 | orchestrator | 2026-04-11 03:15:08.650159 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-11 03:15:08.650177 | orchestrator | Saturday 11 April 2026 03:15:05 +0000 (0:00:02.201) 0:00:53.467 ******** 2026-04-11 03:15:08.650191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 03:15:08.650206 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:08.650232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 03:15:14.980909 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:14.981057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:14.981078 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:14.981088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 03:15:14.981098 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:15:14.981107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:14.981115 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:14.981124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:14.981153 | orchestrator | skipping: [testbed-node-5] 
2026-04-11 03:15:14.981167 | orchestrator | 2026-04-11 03:15:14.981181 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-11 03:15:14.981197 | orchestrator | Saturday 11 April 2026 03:15:08 +0000 (0:00:03.136) 0:00:56.604 ******** 2026-04-11 03:15:14.981210 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:14.981223 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:14.981233 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:14.981241 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:15:14.981248 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:14.981256 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:14.981264 | orchestrator | 2026-04-11 03:15:14.981272 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-11 03:15:14.981280 | orchestrator | Saturday 11 April 2026 03:15:11 +0000 (0:00:02.882) 0:00:59.487 ******** 2026-04-11 03:15:14.981287 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:14.981295 | orchestrator | 2026-04-11 03:15:14.981303 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-11 03:15:14.981325 | orchestrator | Saturday 11 April 2026 03:15:11 +0000 (0:00:00.185) 0:00:59.672 ******** 2026-04-11 03:15:14.981334 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:14.981342 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:14.981350 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:15:14.981357 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:14.981365 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:14.981373 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:14.981381 | orchestrator | 2026-04-11 03:15:14.981389 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-11 03:15:14.981396 | orchestrator | 
Saturday 11 April 2026 03:15:12 +0000 (0:00:00.695) 0:01:00.368 ******** 2026-04-11 03:15:14.981410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 03:15:14.981419 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:14.981428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2026-04-11 03:15:14.981443 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:15:14.981453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 03:15:14.981463 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:14.981473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:14.981482 | orchestrator | skipping: [testbed-node-3] 2026-04-11 
03:15:14.981503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:24.174218 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:24.174317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:24.174334 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:24.174342 | orchestrator | 2026-04-11 03:15:24.174348 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-11 03:15:24.174354 | orchestrator | Saturday 11 April 2026 03:15:14 +0000 (0:00:02.563) 0:01:02.931 
******** 2026-04-11 03:15:24.174359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-11 03:15:24.174392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-11 03:15:24.174397 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 03:15:24.174424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-11 03:15:24.174428 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 03:15:24.174437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 03:15:24.174441 | orchestrator | 2026-04-11 03:15:24.174445 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-11 03:15:24.174449 | orchestrator | Saturday 11 April 2026 03:15:18 +0000 (0:00:03.382) 0:01:06.314 ******** 2026-04-11 03:15:24.174453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-11 03:15:24.174457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-11 03:15:24.174469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 03:15:29.216834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-11 03:15:29.217060 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
2026-04-11 03:15:29.217087 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 03:15:29.217097 | orchestrator | 2026-04-11 03:15:29.217108 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-11 03:15:29.217122 | orchestrator | Saturday 11 April 2026 03:15:24 +0000 (0:00:05.817) 0:01:12.132 ******** 2026-04-11 03:15:29.217150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-04-11 03:15:29.217162 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:29.217196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 03:15:29.217219 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:15:29.217231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 03:15:29.217241 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:29.217252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:29.217263 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:29.217274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:29.217289 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:29.217305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:29.217316 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:29.217336 | orchestrator | 2026-04-11 03:15:29.217347 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-11 03:15:29.217360 | orchestrator | Saturday 11 April 2026 03:15:26 +0000 (0:00:02.391) 0:01:14.523 ******** 2026-04-11 03:15:29.217372 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:29.217385 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:29.217399 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:29.217412 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:15:29.217422 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:15:29.217437 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:15:50.199423 | orchestrator | 2026-04-11 03:15:50.199527 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-11 03:15:50.199542 | orchestrator | Saturday 11 April 2026 03:15:29 +0000 (0:00:02.644) 0:01:17.168 ******** 2026-04-11 03:15:50.199553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:50.199562 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:50.199571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:50.199578 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:50.199587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:50.199594 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:50.199617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-11 03:15:50.199663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-11 03:15:50.199673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-11 03:15:50.199679 | orchestrator | 2026-04-11 03:15:50.199686 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-04-11 03:15:50.199693 | orchestrator | Saturday 11 April 2026 03:15:32 +0000 (0:00:03.632) 0:01:20.801 ******** 2026-04-11 03:15:50.199699 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:50.199705 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:50.199711 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:15:50.199718 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:50.199724 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:50.199731 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:50.199737 | orchestrator | 2026-04-11 03:15:50.199745 | orchestrator | 
TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-11 03:15:50.199752 | orchestrator | Saturday 11 April 2026 03:15:35 +0000 (0:00:02.383) 0:01:23.184 ******** 2026-04-11 03:15:50.199758 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:50.199765 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:50.199771 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:15:50.199778 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:50.199784 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:50.199791 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:50.199798 | orchestrator | 2026-04-11 03:15:50.199806 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-11 03:15:50.199813 | orchestrator | Saturday 11 April 2026 03:15:37 +0000 (0:00:02.301) 0:01:25.485 ******** 2026-04-11 03:15:50.199820 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:50.199826 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:50.199832 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:50.199839 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:15:50.199845 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:50.199860 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:50.199867 | orchestrator | 2026-04-11 03:15:50.199873 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-11 03:15:50.199880 | orchestrator | Saturday 11 April 2026 03:15:40 +0000 (0:00:02.807) 0:01:28.292 ******** 2026-04-11 03:15:50.199887 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:50.199893 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:50.199900 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:15:50.199908 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:50.199915 | orchestrator | skipping: [testbed-node-4] 2026-04-11 
03:15:50.199921 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:50.199927 | orchestrator | 2026-04-11 03:15:50.199934 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-11 03:15:50.199941 | orchestrator | Saturday 11 April 2026 03:15:42 +0000 (0:00:02.504) 0:01:30.797 ******** 2026-04-11 03:15:50.199947 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:15:50.199955 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:50.199990 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:50.199997 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:50.200004 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:50.200012 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:50.200018 | orchestrator | 2026-04-11 03:15:50.200026 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-11 03:15:50.200033 | orchestrator | Saturday 11 April 2026 03:15:45 +0000 (0:00:02.531) 0:01:33.329 ******** 2026-04-11 03:15:50.200040 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:50.200053 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:15:50.200061 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:50.200068 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:50.200075 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:50.200083 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:50.200089 | orchestrator | 2026-04-11 03:15:50.200096 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-11 03:15:50.200103 | orchestrator | Saturday 11 April 2026 03:15:47 +0000 (0:00:02.415) 0:01:35.745 ******** 2026-04-11 03:15:50.200111 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-11 03:15:50.200118 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:50.200128 
| orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-11 03:15:50.200135 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:50.200142 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-11 03:15:50.200156 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:15:55.360380 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-11 03:15:55.360496 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:55.360512 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-11 03:15:55.360523 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:55.360533 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-11 03:15:55.360543 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:55.360552 | orchestrator | 2026-04-11 03:15:55.360563 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-11 03:15:55.360574 | orchestrator | Saturday 11 April 2026 03:15:50 +0000 (0:00:02.405) 0:01:38.150 ******** 2026-04-11 03:15:55.360587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 03:15:55.360624 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:55.360636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 03:15:55.360646 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:15:55.360671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 03:15:55.360682 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:15:55.360711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:55.360723 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:15:55.360733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-04-11 03:15:55.360752 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:15:55.360769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:15:55.360785 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:55.360801 | orchestrator | 2026-04-11 03:15:55.360817 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-11 03:15:55.360832 | orchestrator | Saturday 11 April 2026 03:15:52 +0000 (0:00:02.534) 0:01:40.684 ******** 2026-04-11 03:15:55.360849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2026-04-11 03:15:55.360864 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:15:55.360886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 03:15:55.360900 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:15:55.360923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:16:25.354316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 03:16:25.354482 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:16:25.354498 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:16:25.354509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-11 03:16:25.354519 | orchestrator | skipping: 
[testbed-node-0] 2026-04-11 03:16:25.354529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 03:16:25.354538 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:16:25.354547 | orchestrator | 2026-04-11 03:16:25.354557 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-11 03:16:25.354567 | orchestrator | Saturday 11 April 2026 03:15:55 +0000 (0:00:02.632) 0:01:43.316 ******** 2026-04-11 03:16:25.354576 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:16:25.354585 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:16:25.354594 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:16:25.354615 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:16:25.354625 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:16:25.354634 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:16:25.354642 | orchestrator | 2026-04-11 03:16:25.354651 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-11 03:16:25.354666 | orchestrator | Saturday 11 April 2026 03:15:57 +0000 (0:00:02.434) 0:01:45.750 ******** 2026-04-11 03:16:25.354682 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:16:25.354696 | orchestrator | 
skipping: [testbed-node-0] 2026-04-11 03:16:25.354711 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:16:25.354727 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:16:25.354743 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:16:25.354759 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:16:25.354787 | orchestrator | 2026-04-11 03:16:25.354802 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-11 03:16:25.354812 | orchestrator | Saturday 11 April 2026 03:16:01 +0000 (0:00:04.060) 0:01:49.811 ******** 2026-04-11 03:16:25.354820 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:16:25.354829 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:16:25.354837 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:16:25.354846 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:16:25.354856 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:16:25.354866 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:16:25.354876 | orchestrator | 2026-04-11 03:16:25.354885 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-11 03:16:25.354895 | orchestrator | Saturday 11 April 2026 03:16:04 +0000 (0:00:02.541) 0:01:52.352 ******** 2026-04-11 03:16:25.354921 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:16:25.354932 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:16:25.354942 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:16:25.354952 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:16:25.354961 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:16:25.354971 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:16:25.355036 | orchestrator | 2026-04-11 03:16:25.355049 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-11 03:16:25.355060 | orchestrator | Saturday 11 April 2026 03:16:07 +0000 (0:00:02.802) 
0:01:55.155 ********
2026-04-11 03:16:25.355071 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:16:25.355080 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:16:25.355089 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:16:25.355097 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:16:25.355106 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:16:25.355115 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:16:25.355123 | orchestrator |
2026-04-11 03:16:25.355132 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-04-11 03:16:25.355141 | orchestrator | Saturday 11 April 2026 03:16:09 +0000 (0:00:02.458) 0:01:57.614 ********
2026-04-11 03:16:25.355149 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:16:25.355157 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:16:25.355166 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:16:25.355175 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:16:25.355183 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:16:25.355192 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:16:25.355200 | orchestrator |
2026-04-11 03:16:25.355209 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-04-11 03:16:25.355218 | orchestrator | Saturday 11 April 2026 03:16:12 +0000 (0:00:02.619) 0:02:00.233 ********
2026-04-11 03:16:25.355227 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:16:25.355235 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:16:25.355244 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:16:25.355253 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:16:25.355261 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:16:25.355270 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:16:25.355279 | orchestrator |
2026-04-11 03:16:25.355287 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-04-11 03:16:25.355296 | orchestrator | Saturday 11 April 2026 03:16:14 +0000 (0:00:02.627) 0:02:02.861 ********
2026-04-11 03:16:25.355304 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:16:25.355313 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:16:25.355321 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:16:25.355330 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:16:25.355338 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:16:25.355347 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:16:25.355355 | orchestrator |
2026-04-11 03:16:25.355364 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-04-11 03:16:25.355384 | orchestrator | Saturday 11 April 2026 03:16:17 +0000 (0:00:02.403) 0:02:05.264 ********
2026-04-11 03:16:25.355393 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:16:25.355401 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:16:25.355410 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:16:25.355418 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:16:25.355427 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:16:25.355435 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:16:25.355444 | orchestrator |
2026-04-11 03:16:25.355453 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-04-11 03:16:25.355461 | orchestrator | Saturday 11 April 2026 03:16:19 +0000 (0:00:02.611) 0:02:07.876 ********
2026-04-11 03:16:25.355470 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-11 03:16:25.355479 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:16:25.355488 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-11 03:16:25.355497 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:16:25.355505 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-11 03:16:25.355514 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:16:25.355523 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-11 03:16:25.355532 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:16:25.355541 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-11 03:16:25.355550 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:16:25.355565 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-11 03:16:25.355574 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:16:25.355582 | orchestrator |
2026-04-11 03:16:25.355591 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-04-11 03:16:25.355600 | orchestrator | Saturday 11 April 2026 03:16:22 +0000 (0:00:02.282) 0:02:10.159 ********
2026-04-11 03:16:25.355617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:16:28.221249 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:16:28.221371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:16:28.221426 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:16:28.221444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 03:16:28.221457 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:16:28.221470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:16:28.221484 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:16:28.221514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 03:16:28.221528 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:16:28.221566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 03:16:28.221580 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:16:28.221594 | orchestrator |
2026-04-11 03:16:28.221609 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-04-11 03:16:28.221624 | orchestrator | Saturday 11 April 2026 03:16:25 +0000 (0:00:03.149) 0:02:13.308 ********
2026-04-11 03:16:28.221638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:16:28.221662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:16:28.221681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-11 03:16:28.221695 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 03:16:28.221718 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 03:18:51.629863 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 03:18:51.629989 | orchestrator |
2026-04-11 03:18:51.630003 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-11 03:18:51.630011 | orchestrator | Saturday 11 April 2026 03:16:28 +0000 (0:00:02.867) 0:02:16.176 ********
2026-04-11 03:18:51.630063 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:18:51.630072 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:18:51.630078 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:18:51.630084 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:18:51.630090 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:18:51.630097 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:18:51.630103 | orchestrator |
2026-04-11 03:18:51.630113 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-04-11 03:18:51.630125 | orchestrator | Saturday 11 April 2026 03:16:29 +0000 (0:00:00.936) 0:02:17.113 ********
2026-04-11 03:18:51.630140 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:18:51.630150 | orchestrator |
2026-04-11 03:18:51.630162 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-04-11 03:18:51.630172 | orchestrator | Saturday 11 April 2026 03:16:31 +0000 (0:00:02.024) 0:02:19.138 ********
2026-04-11 03:18:51.630188 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:18:51.630197 | orchestrator |
2026-04-11 03:18:51.630207 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-04-11 03:18:51.630218 | orchestrator | Saturday 11 April 2026 03:16:33 +0000 (0:00:02.038) 0:02:21.177 ********
2026-04-11 03:18:51.630227 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:18:51.630236 | orchestrator |
2026-04-11 03:18:51.630246 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 03:18:51.630256 | orchestrator | Saturday 11 April 2026 03:17:19 +0000 (0:00:46.769) 0:03:07.946 ********
2026-04-11 03:18:51.630266 | orchestrator |
2026-04-11 03:18:51.630276 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 03:18:51.630287 | orchestrator | Saturday 11 April 2026 03:17:20 +0000 (0:00:00.073) 0:03:08.020 ********
2026-04-11 03:18:51.630297 | orchestrator |
2026-04-11 03:18:51.630308 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 03:18:51.630318 | orchestrator | Saturday 11 April 2026 03:17:20 +0000 (0:00:00.075) 0:03:08.095 ********
2026-04-11 03:18:51.630329 | orchestrator |
2026-04-11 03:18:51.630337 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 03:18:51.630356 | orchestrator | Saturday 11 April 2026 03:17:20 +0000 (0:00:00.069) 0:03:08.165 ********
2026-04-11 03:18:51.630363 | orchestrator |
2026-04-11 03:18:51.630370 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 03:18:51.630377 | orchestrator | Saturday 11 April 2026 03:17:20 +0000 (0:00:00.072) 0:03:08.237 ********
2026-04-11 03:18:51.630384 | orchestrator |
2026-04-11 03:18:51.630392 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 03:18:51.630399 | orchestrator | Saturday 11 April 2026 03:17:20 +0000 (0:00:00.093) 0:03:08.331 ********
2026-04-11 03:18:51.630406 | orchestrator |
2026-04-11 03:18:51.630430 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-11 03:18:51.630439 | orchestrator | Saturday 11 April 2026 03:17:20 +0000 (0:00:00.082) 0:03:08.413 ********
2026-04-11 03:18:51.630449 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:18:51.630459 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:18:51.630468 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:18:51.630484 | orchestrator |
2026-04-11 03:18:51.630495 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-11 03:18:51.630505 | orchestrator | Saturday 11 April 2026 03:17:45 +0000 (0:00:25.318) 0:03:33.732 ********
2026-04-11 03:18:51.630514 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:18:51.630524 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:18:51.630533 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:18:51.630542 | orchestrator |
2026-04-11 03:18:51.630552 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 03:18:51.630564 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-11 03:18:51.630576 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-11 03:18:51.630586 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-11 03:18:51.630596 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-11 03:18:51.630626 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-11 03:18:51.630634 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-11 03:18:51.630640 | orchestrator |
2026-04-11 03:18:51.630646 | orchestrator |
2026-04-11 03:18:51.630652 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 03:18:51.630658 | orchestrator | Saturday 11 April 2026 03:18:51 +0000 (0:01:05.276) 0:04:39.008 ********
2026-04-11 03:18:51.630665 | orchestrator | ===============================================================================
2026-04-11 03:18:51.630671 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 65.28s
2026-04-11 03:18:51.630677 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 46.77s
2026-04-11 03:18:51.630683 | orchestrator | neutron : Restart neutron-server container ----------------------------- 25.32s
2026-04-11 03:18:51.630689 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.57s
2026-04-11 03:18:51.630695 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.44s
2026-04-11 03:18:51.630701 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.82s
2026-04-11 03:18:51.630707 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.06s
2026-04-11 03:18:51.630715 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.04s
2026-04-11 03:18:51.630725 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.00s
2026-04-11 03:18:51.630735 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.64s
2026-04-11 03:18:51.630743 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.63s
2026-04-11 03:18:51.630753 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.38s
2026-04-11 03:18:51.630762 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.34s
2026-04-11 03:18:51.630771 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.15s
2026-04-11 03:18:51.630780 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.14s
2026-04-11 03:18:51.630850 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.05s
2026-04-11 03:18:51.630863 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 2.88s
2026-04-11 03:18:51.630874 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.87s
2026-04-11 03:18:51.630884 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.85s
2026-04-11 03:18:51.630895 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 2.81s
2026-04-11 03:18:54.230094 | orchestrator | 2026-04-11 03:18:54 | INFO  | Task 62c2b8cb-05ad-4c7b-8eb0-89363310ba2c (nova) was prepared for execution.
2026-04-11 03:18:54.230199 | orchestrator | 2026-04-11 03:18:54 | INFO  | It takes a moment until task 62c2b8cb-05ad-4c7b-8eb0-89363310ba2c (nova) has been started and output is visible here.
2026-04-11 03:20:56.777689 | orchestrator |
2026-04-11 03:20:56.777802 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 03:20:56.777819 | orchestrator |
2026-04-11 03:20:56.777830 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-11 03:20:56.777841 | orchestrator | Saturday 11 April 2026 03:18:59 +0000 (0:00:00.332) 0:00:00.332 ********
2026-04-11 03:20:56.777852 | orchestrator | changed: [testbed-manager]
2026-04-11 03:20:56.777863 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:20:56.777874 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:20:56.777884 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:20:56.777894 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:20:56.777919 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:20:56.777939 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:20:56.777949 | orchestrator |
2026-04-11 03:20:56.777959 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 03:20:56.777969 | orchestrator | Saturday 11 April 2026 03:19:00 +0000 (0:00:01.011) 0:00:01.344 ********
2026-04-11 03:20:56.777979 | orchestrator | changed: [testbed-manager]
2026-04-11 03:20:56.777989 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:20:56.777999 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:20:56.778009 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:20:56.778072 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:20:56.778083 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:20:56.778094 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:20:56.778103 | orchestrator |
2026-04-11 03:20:56.778142 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 03:20:56.778153 | orchestrator | Saturday 11 April 2026 03:19:01 +0000 (0:00:01.005) 0:00:02.350 ********
2026-04-11 03:20:56.778163 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-11 03:20:56.778174 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-11 03:20:56.778186 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-11 03:20:56.778197 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-11 03:20:56.778208 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-11 03:20:56.778220 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-11 03:20:56.778231 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-11 03:20:56.778242 | orchestrator |
2026-04-11 03:20:56.778253 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-11 03:20:56.778265 | orchestrator |
2026-04-11 03:20:56.778275 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-11 03:20:56.778285 | orchestrator | Saturday 11 April 2026 03:19:01 +0000 (0:00:00.861) 0:00:03.211 ********
2026-04-11 03:20:56.778295 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:20:56.778305 | orchestrator |
2026-04-11 03:20:56.778315 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-04-11 03:20:56.778348 | orchestrator | Saturday 11 April 2026 03:19:02 +0000 (0:00:00.794) 0:00:04.005 ********
2026-04-11 03:20:56.778359 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-04-11 03:20:56.778369 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-04-11 03:20:56.778379 | orchestrator |
2026-04-11 03:20:56.778389 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-04-11 03:20:56.778399 | orchestrator | Saturday 11 April 2026 03:19:06 +0000 (0:00:04.159) 0:00:08.164 ********
2026-04-11 03:20:56.778408 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-11 03:20:56.778418 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-11 03:20:56.778428 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:20:56.778438 | orchestrator |
2026-04-11 03:20:56.778447 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-11 03:20:56.778457 | orchestrator | Saturday 11 April 2026 03:19:11 +0000 (0:00:04.116) 0:00:12.281 ********
2026-04-11 03:20:56.778467 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:20:56.778482 | orchestrator |
2026-04-11 03:20:56.778499 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-11 03:20:56.778517 | orchestrator | Saturday 11 April 2026 03:19:11 +0000 (0:00:00.707) 0:00:12.989 ********
2026-04-11 03:20:56.778533 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:20:56.778559 | orchestrator |
2026-04-11 03:20:56.778578 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-11 03:20:56.778594 | orchestrator | Saturday 11 April 2026 03:19:13 +0000 (0:00:01.271) 0:00:14.261 ********
2026-04-11 03:20:56.778723 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:20:56.778746 | orchestrator |
2026-04-11 03:20:56.778761 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-11 03:20:56.778776 | orchestrator | Saturday 11 April 2026 03:19:15 +0000 (0:00:02.721) 0:00:16.983 ********
2026-04-11 03:20:56.778796 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:20:56.778817 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:20:56.778832 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:20:56.778848 | orchestrator |
2026-04-11 03:20:56.778862 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-11 03:20:56.778878 | orchestrator | Saturday 11 April 2026 03:19:16 +0000 (0:00:00.410) 0:00:17.393 ********
2026-04-11 03:20:56.778893 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:20:56.778908 | orchestrator |
2026-04-11 03:20:56.779027 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-04-11 03:20:56.779063 | orchestrator | Saturday 11 April 2026 03:19:50 +0000 (0:00:34.724) 0:00:52.118 ********
2026-04-11 03:20:56.779080 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:20:56.779098 | orchestrator |
2026-04-11 03:20:56.779118 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-11 03:20:56.779134 | orchestrator | Saturday 11 April 2026 03:20:05 +0000 (0:00:14.729) 0:01:06.847 ********
2026-04-11 03:20:56.779155 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:20:56.779180 | orchestrator |
2026-04-11 03:20:56.779199 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-11 03:20:56.779242 | orchestrator | Saturday 11 April 2026 03:20:17 +0000 (0:00:11.875) 0:01:18.723 ********
2026-04-11 03:20:56.779293 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:20:56.779310 | orchestrator |
2026-04-11 03:20:56.779327 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-04-11 03:20:56.779344 | orchestrator | Saturday 11 April 2026 03:20:18 +0000 (0:00:00.804) 0:01:19.527 ********
2026-04-11 03:20:56.779362 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:20:56.779380 | orchestrator |
2026-04-11 03:20:56.779402 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-11 03:20:56.779428 | orchestrator | Saturday 11 April 2026 03:20:18 +0000 (0:00:00.526) 0:01:20.054 ********
2026-04-11 03:20:56.779446 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:20:56.779483 | orchestrator |
2026-04-11 03:20:56.779500 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-11 03:20:56.779519 | orchestrator | Saturday 11 April 2026 03:20:19 +0000 (0:00:00.758) 0:01:20.812 ********
2026-04-11 03:20:56.779547 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:20:56.779567 | orchestrator |
2026-04-11 03:20:56.779584 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-11 03:20:56.779602 | orchestrator | Saturday 11 April 2026 03:20:37 +0000 (0:00:18.059) 0:01:38.871 ********
2026-04-11 03:20:56.779652 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:20:56.779670 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:20:56.779686 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:20:56.779704 | orchestrator |
2026-04-11 03:20:56.779722 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-11 03:20:56.779745 | orchestrator |
2026-04-11 03:20:56.779770 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-11 03:20:56.779788 | orchestrator | Saturday 11 April 2026 03:20:38 +0000 (0:00:00.405) 0:01:39.277 ********
2026-04-11 03:20:56.779806 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:20:56.779824 | orchestrator |
2026-04-11 03:20:56.779841 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-04-11 03:20:56.779859 | orchestrator | Saturday 11 April 2026 03:20:38 +0000 (0:00:00.849) 0:01:40.126 ********
2026-04-11 03:20:56.779886 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:20:56.779908 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:20:56.779926 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:20:56.779943 | orchestrator |
2026-04-11 03:20:56.779962 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-04-11 03:20:56.779990 | orchestrator | Saturday 11 April 2026 03:20:40 +0000 (0:00:02.014) 0:01:42.141 ********
2026-04-11 03:20:56.780010 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:20:56.780027 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:20:56.780045 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:20:56.780062 | orchestrator |
2026-04-11 03:20:56.780081 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-11 03:20:56.780105 | orchestrator | Saturday 11 April 2026 03:20:43 +0000 (0:00:02.121) 0:01:44.262 ********
2026-04-11 03:20:56.780129 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:20:56.780146 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:20:56.780163 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:20:56.780181 | orchestrator |
2026-04-11 03:20:56.780198 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-11 03:20:56.780216 | orchestrator | Saturday 11 April 2026 03:20:43 +0000 (0:00:00.588) 0:01:44.851 ********
2026-04-11 03:20:56.780234 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-11 03:20:56.780250 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:20:56.780279 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-11 03:20:56.780299 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:20:56.780323 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-11 03:20:56.780347 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-11 03:20:56.780365 | orchestrator |
2026-04-11 03:20:56.780383 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-11 03:20:56.780401 | orchestrator | Saturday 11 April 2026 03:20:50 +0000 (0:00:07.345) 0:01:52.197 ********
2026-04-11 03:20:56.780420 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:20:56.780440 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:20:56.780458 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:20:56.780475 | orchestrator |
2026-04-11 03:20:56.780492 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-11 03:20:56.780503 | orchestrator | Saturday 11 April 2026 03:20:51 +0000 (0:00:00.397) 0:01:52.594 ********
2026-04-11 03:20:56.780514 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-11 03:20:56.780538 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:20:56.780549 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-11 03:20:56.780560 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:20:56.780571 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-11 03:20:56.780582 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:20:56.780592 | orchestrator |
2026-04-11 03:20:56.780603 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-11 03:20:56.780638 | orchestrator | Saturday 11 April 2026 03:20:52 +0000 (0:00:01.270) 0:01:53.864 ********
2026-04-11 03:20:56.780649 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:20:56.780660 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:20:56.780671 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:20:56.780682 | orchestrator |
2026-04-11 03:20:56.780693 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-11 03:20:56.780703 | orchestrator | Saturday 11 April 2026 03:20:53 +0000 (0:00:00.485) 0:01:54.350 ********
2026-04-11 03:20:56.780714 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:20:56.780725 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:20:56.780736 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:20:56.780747 | orchestrator |
2026-04-11 03:20:56.780757 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-11 03:20:56.780768 | orchestrator | Saturday 11 April 2026 03:20:54 +0000 (0:00:01.008) 0:01:55.358 ********
2026-04-11 03:20:56.780781 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:20:56.780791 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:20:56.780815 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:22:16.749319 | orchestrator |
2026-04-11 03:22:16.749459 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-11 03:22:16.749481 | orchestrator | Saturday 11 April 2026 03:20:56 +0000 (0:00:02.618) 0:01:57.977 ********
2026-04-11 03:22:16.749493 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:22:16.749505 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:22:16.749516 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:22:16.749557 | orchestrator |
2026-04-11 03:22:16.749568 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-11 03:22:16.749580 | orchestrator | Saturday 11 April 2026 03:21:19 +0000 (0:00:23.210) 0:02:21.187 ********
2026-04-11 03:22:16.749591 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:22:16.749602 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:22:16.749614 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:22:16.749625 | orchestrator |
2026-04-11 03:22:16.749636 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-11 03:22:16.749647 | orchestrator | Saturday 11 April 2026 03:21:32 +0000 (0:00:12.186) 0:02:33.374 ********
2026-04-11 03:22:16.749658 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:22:16.749669 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:22:16.749680 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:22:16.749691 | orchestrator | 2026-04-11 03:22:16.749702 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-11 03:22:16.749713 | orchestrator | Saturday 11 April 2026 03:21:33 +0000 (0:00:01.252) 0:02:34.627 ******** 2026-04-11 03:22:16.749725 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:22:16.749737 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:22:16.749758 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:22:16.749784 | orchestrator | 2026-04-11 03:22:16.749806 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-11 03:22:16.749825 | orchestrator | Saturday 11 April 2026 03:21:45 +0000 (0:00:11.986) 0:02:46.613 ******** 2026-04-11 03:22:16.749845 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:22:16.749865 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:22:16.749884 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:22:16.749902 | orchestrator | 2026-04-11 03:22:16.749923 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-11 03:22:16.749981 | orchestrator | Saturday 11 April 2026 03:21:46 +0000 (0:00:01.157) 0:02:47.771 ******** 2026-04-11 03:22:16.750004 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:22:16.750078 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:22:16.750090 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:22:16.750101 | orchestrator | 2026-04-11 03:22:16.750112 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-11 03:22:16.750123 | orchestrator | 2026-04-11 03:22:16.750134 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-11 03:22:16.750144 | orchestrator | Saturday 11 April 2026 03:21:46 +0000 (0:00:00.387) 0:02:48.158 ******** 2026-04-11 03:22:16.750212 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:22:16.750226 | orchestrator | 2026-04-11 03:22:16.750237 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-04-11 03:22:16.750248 | orchestrator | Saturday 11 April 2026 03:21:47 +0000 (0:00:00.853) 0:02:49.011 ******** 2026-04-11 03:22:16.750259 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-11 03:22:16.750270 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-04-11 03:22:16.750281 | orchestrator | 2026-04-11 03:22:16.750292 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-04-11 03:22:16.750303 | orchestrator | Saturday 11 April 2026 03:21:51 +0000 (0:00:03.273) 0:02:52.285 ******** 2026-04-11 03:22:16.750314 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-11 03:22:16.750327 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-11 03:22:16.750338 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-11 03:22:16.750350 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-11 03:22:16.750361 | orchestrator | 2026-04-11 03:22:16.750372 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-11 03:22:16.750383 | orchestrator | Saturday 11 April 2026 03:21:57 +0000 (0:00:06.356) 0:02:58.641 ******** 2026-04-11 03:22:16.750394 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-11 03:22:16.750405 | orchestrator | 2026-04-11 03:22:16.750416 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-04-11 03:22:16.750427 | orchestrator | Saturday 11 April 2026 03:22:00 +0000 (0:00:03.142) 0:03:01.784 ******** 2026-04-11 03:22:16.750437 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-11 03:22:16.750448 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-11 03:22:16.750459 | orchestrator | 2026-04-11 03:22:16.750470 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-11 03:22:16.750481 | orchestrator | Saturday 11 April 2026 03:22:04 +0000 (0:00:03.749) 0:03:05.534 ******** 2026-04-11 03:22:16.750492 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-11 03:22:16.750502 | orchestrator | 2026-04-11 03:22:16.750513 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-04-11 03:22:16.750559 | orchestrator | Saturday 11 April 2026 03:22:07 +0000 (0:00:03.258) 0:03:08.792 ******** 2026-04-11 03:22:16.750580 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-11 03:22:16.750602 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-11 03:22:16.750621 | orchestrator | 2026-04-11 03:22:16.750643 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-11 03:22:16.750682 | orchestrator | Saturday 11 April 2026 03:22:15 +0000 (0:00:07.740) 0:03:16.532 ******** 2026-04-11 03:22:16.750707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:16.750751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:16.750772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:16.750812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-04-11 03:22:21.598613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:22:21.598705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:22:21.598717 | orchestrator | 2026-04-11 03:22:21.598726 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-11 03:22:21.598735 | orchestrator | Saturday 11 April 2026 03:22:16 +0000 (0:00:01.416) 0:03:17.949 ******** 2026-04-11 03:22:21.598742 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:22:21.598755 | orchestrator | 2026-04-11 03:22:21.598767 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-11 03:22:21.598780 | orchestrator | Saturday 11 April 2026 03:22:16 +0000 (0:00:00.144) 0:03:18.094 ******** 2026-04-11 03:22:21.598790 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:22:21.598802 | 
orchestrator | skipping: [testbed-node-1] 2026-04-11 03:22:21.598813 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:22:21.598824 | orchestrator | 2026-04-11 03:22:21.598835 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-11 03:22:21.598848 | orchestrator | Saturday 11 April 2026 03:22:17 +0000 (0:00:00.376) 0:03:18.470 ******** 2026-04-11 03:22:21.598861 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 03:22:21.598874 | orchestrator | 2026-04-11 03:22:21.598886 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-11 03:22:21.598894 | orchestrator | Saturday 11 April 2026 03:22:18 +0000 (0:00:00.747) 0:03:19.218 ******** 2026-04-11 03:22:21.598901 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:22:21.598908 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:22:21.598916 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:22:21.598923 | orchestrator | 2026-04-11 03:22:21.598930 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-11 03:22:21.598938 | orchestrator | Saturday 11 April 2026 03:22:18 +0000 (0:00:00.568) 0:03:19.787 ******** 2026-04-11 03:22:21.598945 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:22:21.598954 | orchestrator | 2026-04-11 03:22:21.598961 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-11 03:22:21.598969 | orchestrator | Saturday 11 April 2026 03:22:19 +0000 (0:00:00.621) 0:03:20.408 ******** 2026-04-11 03:22:21.598994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:21.599039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:21.599050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:21.599058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:22:21.599067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:22:21.599085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:22:21.599094 | orchestrator | 2026-04-11 03:22:21.599105 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-11 03:22:23.554979 | orchestrator | Saturday 11 April 2026 03:22:21 +0000 (0:00:02.390) 0:03:22.799 ******** 2026-04-11 03:22:23.555093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-11 03:22:23.555116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:22:23.555130 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:22:23.555147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-11 03:22:23.555203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:22:23.555217 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:22:23.555251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-11 03:22:23.555266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:22:23.555279 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:22:23.555291 | orchestrator | 2026-04-11 03:22:23.555304 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-11 03:22:23.555318 | orchestrator | Saturday 11 April 2026 03:22:22 +0000 (0:00:00.933) 0:03:23.732 
******** 2026-04-11 03:22:23.555331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-11 03:22:23.555355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:22:23.555369 | orchestrator | skipping: [testbed-node-0] 
2026-04-11 03:22:23.555398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-11 03:22:26.003835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:22:26.003929 | orchestrator | skipping: [testbed-node-1] 2026-04-11 
03:22:26.003943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-11 03:22:26.003977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:22:26.003986 | orchestrator | skipping: [testbed-node-2] 2026-04-11 
03:22:26.003993 | orchestrator | 2026-04-11 03:22:26.004002 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-11 03:22:26.004010 | orchestrator | Saturday 11 April 2026 03:22:23 +0000 (0:00:01.025) 0:03:24.757 ******** 2026-04-11 03:22:26.004030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:26.004054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:26.004063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:26.004083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:22:26.004091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:22:26.004104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-11 03:22:33.064920 | orchestrator | 2026-04-11 03:22:33.065009 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-11 03:22:33.065020 | orchestrator | Saturday 11 April 2026 03:22:25 +0000 (0:00:02.443) 0:03:27.201 ******** 2026-04-11 03:22:33.065031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:33.065062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:33.065083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:33.065105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:22:33.065115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:22:33.065130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:22:33.065136 | orchestrator | 2026-04-11 03:22:33.065143 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-11 03:22:33.065150 | orchestrator | Saturday 11 April 2026 03:22:32 +0000 (0:00:06.382) 0:03:33.584 ******** 2026-04-11 03:22:33.065160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-11 03:22:33.065167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:22:33.065174 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:22:33.065189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-11 03:22:37.589057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:22:37.589179 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:22:37.589207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-11 03:22:37.589241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:22:37.589255 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:22:37.589268 | orchestrator | 2026-04-11 03:22:37.589282 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-11 03:22:37.589296 | orchestrator | Saturday 11 April 2026 03:22:33 +0000 (0:00:00.687) 0:03:34.271 ******** 2026-04-11 03:22:37.589308 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:22:37.589321 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:22:37.589333 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:22:37.589345 | orchestrator | 2026-04-11 03:22:37.589357 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-04-11 03:22:37.589371 | orchestrator | Saturday 11 April 2026 03:22:34 +0000 (0:00:01.555) 0:03:35.827 ******** 2026-04-11 03:22:37.589384 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:22:37.589395 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:22:37.589408 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:22:37.589420 | orchestrator | 2026-04-11 03:22:37.589432 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-04-11 03:22:37.589444 | orchestrator | Saturday 11 April 2026 03:22:34 +0000 (0:00:00.354) 0:03:36.181 ******** 2026-04-11 03:22:37.589596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:37.589623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:37.589649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-11 03:22:37.589666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:22:37.589692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:22:37.589718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:23:18.985034 | orchestrator | 2026-04-11 03:23:18.985116 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-11 03:23:18.985124 | orchestrator | Saturday 11 April 2026 03:22:37 +0000 (0:00:02.129) 0:03:38.310 ******** 2026-04-11 03:23:18.985128 | orchestrator | 2026-04-11 03:23:18.985133 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-11 03:23:18.985138 | orchestrator | Saturday 11 April 2026 03:22:37 +0000 (0:00:00.163) 0:03:38.474 ******** 2026-04-11 
03:23:18.985142 | orchestrator | 2026-04-11 03:23:18.985146 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-11 03:23:18.985150 | orchestrator | Saturday 11 April 2026 03:22:37 +0000 (0:00:00.150) 0:03:38.625 ******** 2026-04-11 03:23:18.985154 | orchestrator | 2026-04-11 03:23:18.985158 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-11 03:23:18.985162 | orchestrator | Saturday 11 April 2026 03:22:37 +0000 (0:00:00.161) 0:03:38.786 ******** 2026-04-11 03:23:18.985166 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:23:18.985171 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:23:18.985175 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:23:18.985179 | orchestrator | 2026-04-11 03:23:18.985183 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-11 03:23:18.985187 | orchestrator | Saturday 11 April 2026 03:23:00 +0000 (0:00:23.205) 0:04:01.992 ******** 2026-04-11 03:23:18.985191 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:23:18.985195 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:23:18.985199 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:23:18.985202 | orchestrator | 2026-04-11 03:23:18.985206 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-11 03:23:18.985210 | orchestrator | 2026-04-11 03:23:18.985214 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-11 03:23:18.985218 | orchestrator | Saturday 11 April 2026 03:23:06 +0000 (0:00:05.710) 0:04:07.703 ******** 2026-04-11 03:23:18.985223 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:23:18.985228 | orchestrator | 2026-04-11 03:23:18.985244 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-11 03:23:18.985248 | orchestrator | Saturday 11 April 2026 03:23:07 +0000 (0:00:01.398) 0:04:09.101 ******** 2026-04-11 03:23:18.985267 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:23:18.985271 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:23:18.985276 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:23:18.985279 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:23:18.985283 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:23:18.985287 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:23:18.985291 | orchestrator | 2026-04-11 03:23:18.985295 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-11 03:23:18.985299 | orchestrator | Saturday 11 April 2026 03:23:08 +0000 (0:00:00.912) 0:04:10.014 ******** 2026-04-11 03:23:18.985303 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:23:18.985307 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:23:18.985311 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:23:18.985315 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:23:18.985319 | orchestrator | 2026-04-11 03:23:18.985323 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-11 03:23:18.985327 | orchestrator | Saturday 11 April 2026 03:23:09 +0000 (0:00:00.934) 0:04:10.949 ******** 2026-04-11 03:23:18.985332 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-11 03:23:18.985335 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-11 03:23:18.985339 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-11 03:23:18.985343 | orchestrator | 2026-04-11 03:23:18.985347 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-11 
03:23:18.985351 | orchestrator | Saturday 11 April 2026 03:23:10 +0000 (0:00:00.980) 0:04:11.929 ******** 2026-04-11 03:23:18.985355 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-04-11 03:23:18.985359 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-11 03:23:18.985363 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-04-11 03:23:18.985366 | orchestrator | 2026-04-11 03:23:18.985370 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-11 03:23:18.985374 | orchestrator | Saturday 11 April 2026 03:23:11 +0000 (0:00:01.283) 0:04:13.213 ******** 2026-04-11 03:23:18.985378 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-11 03:23:18.985382 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:23:18.985386 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-11 03:23:18.985390 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:23:18.985394 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-11 03:23:18.985397 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:23:18.985401 | orchestrator | 2026-04-11 03:23:18.985405 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-11 03:23:18.985409 | orchestrator | Saturday 11 April 2026 03:23:12 +0000 (0:00:00.603) 0:04:13.817 ******** 2026-04-11 03:23:18.985413 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-11 03:23:18.985440 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-11 03:23:18.985444 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-11 03:23:18.985448 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-11 03:23:18.985452 | orchestrator | skipping: [testbed-node-0] 
2026-04-11 03:23:18.985456 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-11 03:23:18.985483 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-11 03:23:18.985492 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:23:18.985513 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-11 03:23:18.985520 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-11 03:23:18.985526 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-11 03:23:18.985547 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-11 03:23:18.985554 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:23:18.985560 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-11 03:23:18.985567 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-11 03:23:18.985572 | orchestrator | 2026-04-11 03:23:18.985577 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-11 03:23:18.985618 | orchestrator | Saturday 11 April 2026 03:23:13 +0000 (0:00:01.344) 0:04:15.161 ******** 2026-04-11 03:23:18.985624 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:23:18.985631 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:23:18.985637 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:23:18.985643 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:23:18.985650 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:23:18.985656 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:23:18.985662 | orchestrator | 2026-04-11 03:23:18.985670 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-11 03:23:18.985676 | orchestrator | 
Saturday 11 April 2026 03:23:15 +0000 (0:00:01.184) 0:04:16.345 ******** 2026-04-11 03:23:18.985683 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:23:18.985690 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:23:18.985698 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:23:18.985705 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:23:18.985713 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:23:18.985720 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:23:18.985726 | orchestrator | 2026-04-11 03:23:18.985809 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-11 03:23:18.985820 | orchestrator | Saturday 11 April 2026 03:23:17 +0000 (0:00:01.889) 0:04:18.235 ******** 2026-04-11 03:23:18.985827 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 03:23:18.985838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 03:23:18.985849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 03:23:20.867029 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 03:23:20.867112 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 03:23:20.867135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 03:23:20.867143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 03:23:20.867152 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 03:23:20.867161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 03:23:20.867208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 03:23:20.867221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 03:23:20.867238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 03:23:20.867250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 03:23:20.867262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 03:23:20.867274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 03:23:20.867295 | orchestrator | 2026-04-11 03:23:20.867304 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-11 03:23:20.867312 | orchestrator | Saturday 11 
April 2026 03:23:19 +0000 (0:00:02.423) 0:04:20.658 ******** 2026-04-11 03:23:20.867333 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:23:20.867341 | orchestrator | 2026-04-11 03:23:20.867357 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-11 03:23:20.867369 | orchestrator | Saturday 11 April 2026 03:23:20 +0000 (0:00:01.410) 0:04:22.069 ******** 2026-04-11 03:23:24.284114 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 03:23:24.284229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 03:23:24.284242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 03:23:24.284250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 
03:23:24.284277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 03:23:24.284301 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 03:23:24.284309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 03:23:24.284319 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 03:23:24.284326 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 03:23:24.284332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 03:23:24.284345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 03:23:24.284362 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 03:23:26.472981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 03:23:26.473064 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 03:23:26.473073 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 03:23:26.473078 | orchestrator | 2026-04-11 03:23:26.473083 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-11 03:23:26.473106 | orchestrator | Saturday 11 April 2026 03:23:24 +0000 (0:00:03.806) 0:04:25.875 ******** 2026-04-11 03:23:26.473112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 03:23:26.473117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 03:23:26.473134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 03:23:26.473138 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:23:26.473147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 03:23:26.473151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 03:23:26.473155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 03:23:26.473163 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:23:26.473167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 03:23:26.473175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 03:23:28.104437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 03:23:28.104637 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:23:28.104670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:23:28.104682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:23:28.104722 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:23:28.104732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:23:28.104741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:23:28.104751 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:23:28.104759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:23:28.104785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:23:28.104795 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:23:28.104804 | orchestrator |
2026-04-11 03:23:28.104814 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-04-11 03:23:28.104824 | orchestrator | Saturday 11 April 2026 03:23:26 +0000 (0:00:01.918) 0:04:27.793 ********
2026-04-11 03:23:28.104839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 03:23:28.104857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 03:23:28.104870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 03:23:28.104886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 03:23:28.104922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 03:23:32.900254 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:23:32.900344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 03:23:32.900370 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:23:32.900378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 03:23:32.900384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 03:23:32.900390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 03:23:32.900395 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:23:32.900401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:23:32.900419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:23:32.900424 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:23:32.900432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:23:32.900499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:23:32.900511 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:23:32.900518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:23:32.900523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:23:32.900528 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:23:32.900533 | orchestrator |
2026-04-11 03:23:32.900539 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-11 03:23:32.900545 | orchestrator | Saturday 11 April 2026 03:23:28 +0000 (0:00:02.242) 0:04:30.036 ********
2026-04-11 03:23:32.900550 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:23:32.900555 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:23:32.900560 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:23:32.900565 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:23:32.900570 | orchestrator |
2026-04-11 03:23:32.900575 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-04-11 03:23:32.900580 | orchestrator | Saturday 11 April 2026 03:23:30 +0000 (0:00:01.186) 0:04:31.223 ********
2026-04-11 03:23:32.900585 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-11 03:23:32.900590 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-11 03:23:32.900595 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-11 03:23:32.900600 | orchestrator |
2026-04-11 03:23:32.900605 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-04-11 03:23:32.900610 | orchestrator | Saturday 11 April 2026 03:23:31 +0000 (0:00:01.215) 0:04:32.438 ********
2026-04-11 03:23:32.900615 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-11 03:23:32.900619 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-11 03:23:32.900624 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-11 03:23:32.900630 | orchestrator |
2026-04-11 03:23:32.900637 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-04-11 03:23:32.900650 | orchestrator | Saturday 11 April 2026 03:23:32 +0000 (0:00:01.085) 0:04:33.524 ********
2026-04-11 03:23:32.900655 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:23:32.900661 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:23:32.900672 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:23:55.721135 | orchestrator |
2026-04-11 03:23:55.721244 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-04-11 03:23:55.721259 | orchestrator | Saturday 11 April 2026 03:23:32 +0000 (0:00:00.582) 0:04:34.106 ********
2026-04-11 03:23:55.721267 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:23:55.721276 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:23:55.721280 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:23:55.721285 | orchestrator |
2026-04-11 03:23:55.721290 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-04-11 03:23:55.721296 | orchestrator | Saturday 11 April 2026 03:23:33 +0000 (0:00:00.559) 0:04:34.666 ********
2026-04-11 03:23:55.721301 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-11 03:23:55.721306 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-11 03:23:55.721311 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-11 03:23:55.721316 | orchestrator |
2026-04-11 03:23:55.721333 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-04-11 03:23:55.721338 | orchestrator | Saturday 11 April 2026 03:23:34 +0000 (0:00:01.394) 0:04:36.060 ********
2026-04-11 03:23:55.721343 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-11 03:23:55.721348 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-11 03:23:55.721352 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-11 03:23:55.721357 | orchestrator |
2026-04-11 03:23:55.721361 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-04-11 03:23:55.721366 | orchestrator | Saturday 11 April 2026 03:23:36 +0000 (0:00:01.233) 0:04:37.294 ********
2026-04-11 03:23:55.721371 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-11 03:23:55.721376 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-11 03:23:55.721380 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-11 03:23:55.721385 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-04-11 03:23:55.721390 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-04-11 03:23:55.721394 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-04-11 03:23:55.721399 | orchestrator |
2026-04-11 03:23:55.721404 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-04-11 03:23:55.721411 | orchestrator | Saturday 11 April 2026 03:23:40 +0000 (0:00:04.092) 0:04:41.386 ********
2026-04-11 03:23:55.721419 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:23:55.721523 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:23:55.721532 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:23:55.721541 | orchestrator |
2026-04-11 03:23:55.721545 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-04-11 03:23:55.721550 | orchestrator | Saturday 11 April 2026 03:23:40 +0000 (0:00:00.346) 0:04:41.733 ********
2026-04-11 03:23:55.721555 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:23:55.721559 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:23:55.721564 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:23:55.721570 | orchestrator |
2026-04-11 03:23:55.721577 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-04-11 03:23:55.721584 | orchestrator | Saturday 11 April 2026 03:23:41 +0000 (0:00:00.583) 0:04:42.317 ********
2026-04-11 03:23:55.721593 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:23:55.721603 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:23:55.721610 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:23:55.721617 | orchestrator |
2026-04-11 03:23:55.721624 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-04-11 03:23:55.721656 | orchestrator | Saturday 11 April 2026 03:23:42 +0000 (0:00:01.449) 0:04:43.766 ********
2026-04-11 03:23:55.721664 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-11 03:23:55.721672 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-11 03:23:55.721678 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-11 03:23:55.721685 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-11 03:23:55.721692 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-11 03:23:55.721700 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-11 03:23:55.721706 | orchestrator |
2026-04-11 03:23:55.721713 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-04-11 03:23:55.721720 | orchestrator | Saturday 11 April 2026 03:23:46 +0000 (0:00:03.578) 0:04:47.345 ********
2026-04-11 03:23:55.721728 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-11 03:23:55.721734 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-11 03:23:55.721741 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-11 03:23:55.721747 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-11 03:23:55.721754 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:23:55.721761 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-11 03:23:55.721768 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:23:55.721775 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-11 03:23:55.721782 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:23:55.721788 | orchestrator |
2026-04-11 03:23:55.721796 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-04-11 03:23:55.721802 | orchestrator | Saturday 11 April 2026 03:23:49 +0000 (0:00:00.160) 0:04:50.917 ********
2026-04-11 03:23:55.721822 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:23:55.721826 | orchestrator |
2026-04-11 03:23:55.721831 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-04-11 03:23:55.721835 | orchestrator | Saturday 11 April 2026 03:23:49 +0000 (0:00:00.160) 0:04:51.077 ********
2026-04-11 03:23:55.721840 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:23:55.721844 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:23:55.721848 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:23:55.721852 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:23:55.721856 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:23:55.721860 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:23:55.721865 | orchestrator |
2026-04-11 03:23:55.721869 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-04-11 03:23:55.721873 | orchestrator | Saturday 11 April 2026 03:23:50 +0000 (0:00:00.929) 0:04:52.007 ********
2026-04-11 03:23:55.721877 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-11 03:23:55.721882 | orchestrator |
2026-04-11 03:23:55.721892 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-04-11 03:23:55.721896 | orchestrator | Saturday 11 April 2026 03:23:51 +0000 (0:00:00.765) 0:04:52.773 ********
2026-04-11 03:23:55.721900 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:23:55.721904 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:23:55.721908 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:23:55.721913 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:23:55.721917 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:23:55.721921 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:23:55.721925 | orchestrator |
2026-04-11 03:23:55.721936 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-04-11 03:23:55.721940 | orchestrator | Saturday 11 April 2026 03:23:52 +0000 (0:00:00.874) 0:04:53.647 ********
2026-04-11 03:23:55.721947 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 03:23:55.721955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 03:23:55.721960 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 03:23:55.721970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:24:01.047326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:24:01.047489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:24:01.047504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 03:24:01.047515 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 03:24:01.047523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 03:24:01.047532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:24:01.047555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:24:01.047570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:24:01.047587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 03:24:01.047597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 03:24:01.047606 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 03:24:01.047615 | orchestrator |
2026-04-11 03:24:01.047625 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-04-11 03:24:01.047635 | orchestrator | Saturday 11 April 2026 03:23:56 +0000 (0:00:03.800) 0:04:57.447 ********
2026-04-11 03:24:01.047649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/',
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 03:24:03.398099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 03:24:03.398276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 03:24:03.398296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 03:24:03.398307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 03:24:03.398319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 03:24:03.398350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 03:24:03.398380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 03:24:03.398393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 03:24:03.398405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 03:24:03.398460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 03:24:03.398474 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 03:24:03.398494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 03:24:23.209220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 03:24:23.209300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 03:24:23.209308 | orchestrator | 2026-04-11 03:24:23.209314 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-11 03:24:23.209321 | orchestrator | Saturday 11 April 2026 03:24:03 +0000 (0:00:07.154) 0:05:04.602 ******** 2026-04-11 03:24:23.209325 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:24:23.209332 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:24:23.209337 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:24:23.209341 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:24:23.209346 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:24:23.209350 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:24:23.209355 | orchestrator | 2026-04-11 03:24:23.209360 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-11 03:24:23.209364 | orchestrator | Saturday 11 April 2026 03:24:04 +0000 (0:00:01.528) 0:05:06.130 ******** 2026-04-11 03:24:23.209369 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-11 03:24:23.209374 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-11 03:24:23.209379 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-11 03:24:23.209384 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-11 03:24:23.209388 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-11 03:24:23.209393 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-11 03:24:23.209431 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-11 03:24:23.209437 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:24:23.209442 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-11 03:24:23.209447 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:24:23.209452 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-11 03:24:23.209456 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:24:23.209461 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-11 03:24:23.209481 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-11 03:24:23.209486 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-11 03:24:23.209491 | orchestrator | 2026-04-11 03:24:23.209496 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-11 03:24:23.209501 | orchestrator | Saturday 11 April 2026 03:24:08 +0000 (0:00:04.066) 0:05:10.197 ******** 2026-04-11 03:24:23.209505 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:24:23.209510 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:24:23.209515 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:24:23.209519 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:24:23.209524 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:24:23.209528 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:24:23.209533 | orchestrator | 2026-04-11 03:24:23.209538 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-11 03:24:23.209542 | orchestrator | Saturday 11 April 2026 03:24:09 +0000 (0:00:00.653) 0:05:10.850 ******** 2026-04-11 03:24:23.209547 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-11 03:24:23.209552 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-11 03:24:23.209557 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-11 03:24:23.209561 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-11 03:24:23.209576 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-11 03:24:23.209585 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-11 03:24:23.209590 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-11 03:24:23.209594 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-11 03:24:23.209599 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-11 03:24:23.209603 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-11 03:24:23.209608 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:24:23.209613 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-11 03:24:23.209617 | orchestrator | 
skipping: [testbed-node-0] 2026-04-11 03:24:23.209622 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-11 03:24:23.209626 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:24:23.209631 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-11 03:24:23.209635 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-11 03:24:23.209640 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-11 03:24:23.209644 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-11 03:24:23.209649 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-11 03:24:23.209653 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-11 03:24:23.209658 | orchestrator | 2026-04-11 03:24:23.209667 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-11 03:24:23.209672 | orchestrator | Saturday 11 April 2026 03:24:15 +0000 (0:00:05.490) 0:05:16.341 ******** 2026-04-11 03:24:23.209676 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-11 03:24:23.209681 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-11 03:24:23.209685 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-11 03:24:23.209690 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-11 03:24:23.209695 
| orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-11 03:24:23.209699 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-11 03:24:23.209704 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-11 03:24:23.209708 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-11 03:24:23.209713 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-11 03:24:23.209718 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-11 03:24:23.209722 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-11 03:24:23.209727 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-11 03:24:23.209731 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-11 03:24:23.209736 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:24:23.209741 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-11 03:24:23.209745 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-11 03:24:23.209750 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:24:23.209755 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-11 03:24:23.209759 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-11 03:24:23.209764 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:24:23.209768 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-11 03:24:23.209773 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-11 03:24:23.209779 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-11 03:24:23.209785 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-11 03:24:23.209790 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-11 03:24:23.209798 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-11 03:24:28.234271 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-11 03:24:28.234372 | orchestrator | 2026-04-11 03:24:28.234383 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-11 03:24:28.234440 | orchestrator | Saturday 11 April 2026 03:24:23 +0000 (0:00:08.051) 0:05:24.392 ******** 2026-04-11 03:24:28.235284 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:24:28.235318 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:24:28.235326 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:24:28.235333 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:24:28.235339 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:24:28.235345 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:24:28.235351 | orchestrator | 2026-04-11 03:24:28.235358 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-11 03:24:28.235518 | orchestrator | Saturday 11 April 2026 03:24:24 +0000 (0:00:00.911) 0:05:25.304 ******** 2026-04-11 03:24:28.235539 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:24:28.235546 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:24:28.235552 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:24:28.235558 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:24:28.235565 | orchestrator | 
skipping: [testbed-node-1] 2026-04-11 03:24:28.235571 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:24:28.235577 | orchestrator | 2026-04-11 03:24:28.235583 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-11 03:24:28.235590 | orchestrator | Saturday 11 April 2026 03:24:24 +0000 (0:00:00.705) 0:05:26.009 ******** 2026-04-11 03:24:28.235596 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:24:28.235603 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:24:28.235609 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:24:28.235615 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:24:28.235621 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:24:28.235627 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:24:28.235634 | orchestrator | 2026-04-11 03:24:28.235640 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-11 03:24:28.235647 | orchestrator | Saturday 11 April 2026 03:24:27 +0000 (0:00:02.232) 0:05:28.242 ******** 2026-04-11 03:24:28.235656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2026-04-11 03:24:28.235675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 03:24:28.235685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 03:24:28.235729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 03:24:28.235745 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:24:28.235751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 03:24:28.235758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 03:24:28.235765 | orchestrator | skipping: [testbed-node-4] 
2026-04-11 03:24:28.235771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 03:24:28.235778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 03:24:28.235794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 03:24:32.402558 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:24:32.402659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:24:32.402671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:24:32.402678 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:24:32.402685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:24:32.402691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:24:32.402697 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:24:32.402703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:24:32.402709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:24:32.402738 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:24:32.402745 | orchestrator |
2026-04-11 03:24:32.402751 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-04-11 03:24:32.402758 | orchestrator | Saturday 11 April 2026 03:24:28 +0000 (0:00:01.549) 0:05:29.791 ********
2026-04-11 03:24:32.402776 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-11 03:24:32.402795 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-11 03:24:32.402801 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:24:32.402806 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-11 03:24:32.402812 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-11 03:24:32.402817 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:24:32.402823 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-11 03:24:32.402828 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-11 03:24:32.402834 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:24:32.402839 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-11 03:24:32.402845 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-11 03:24:32.402850 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:24:32.402855 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-11 03:24:32.402861 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-11 03:24:32.402866 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:24:32.402872 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-11 03:24:32.402878 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-11 03:24:32.402883 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:24:32.402888 | orchestrator |
2026-04-11 03:24:32.402894 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-04-11 03:24:32.402900 | orchestrator | Saturday 11 April 2026 03:24:29 +0000 (0:00:01.001) 0:05:30.792 ********
2026-04-11 03:24:32.402907 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 03:24:32.402915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 03:24:32.402927 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 03:24:32.402943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:24:34.971059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:24:34.971166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 03:24:34.971185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 03:24:34.971197 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 03:24:34.971236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:24:34.971248 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 03:24:34.971292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:24:34.971306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 03:24:34.971318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 03:24:34.971329 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 03:24:34.971349 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 03:24:34.971361 | orchestrator |
2026-04-11 03:24:34.971373 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-11 03:24:34.971416 | orchestrator | Saturday 11 April 2026 03:24:32 +0000 (0:00:03.316) 0:05:34.109 ********
2026-04-11 03:24:34.971428 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:24:34.971440 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:24:34.971465 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:24:34.971476 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:24:34.971495 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:24:34.971505 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:24:34.971516 | orchestrator |
2026-04-11 03:24:34.971526 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-11 03:24:34.971537 | orchestrator | Saturday 11 April 2026 03:24:33 +0000 (0:00:00.918) 0:05:35.027 ********
2026-04-11 03:24:34.971547 | orchestrator |
2026-04-11 03:24:34.971558 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-11 03:24:34.971575 | orchestrator | Saturday 11 April 2026 03:24:33 +0000 (0:00:00.159) 0:05:35.187 ********
2026-04-11 03:24:34.971586 | orchestrator |
2026-04-11 03:24:34.971598 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-11 03:24:34.971608 | orchestrator | Saturday 11 April 2026 03:24:34 +0000 (0:00:00.165) 0:05:35.352 ********
2026-04-11 03:24:34.971619 | orchestrator |
2026-04-11 03:24:34.971630 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-11 03:24:34.971648 | orchestrator | Saturday 11 April 2026 03:24:34 +0000 (0:00:00.162) 0:05:35.514 ********
2026-04-11 03:27:49.117791 | orchestrator |
2026-04-11 03:27:49.117917 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-11 03:27:49.117941 | orchestrator | Saturday 11 April 2026 03:24:34 +0000 (0:00:00.157) 0:05:35.672 ********
2026-04-11 03:27:49.117954 | orchestrator |
2026-04-11 03:27:49.117967 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-11 03:27:49.117976 | orchestrator | Saturday 11 April 2026 03:24:34 +0000 (0:00:00.338) 0:05:36.010 ********
2026-04-11 03:27:49.117984 | orchestrator |
2026-04-11 03:27:49.117993 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-04-11 03:27:49.118001 | orchestrator | Saturday 11 April 2026 03:24:34 +0000 (0:00:00.151) 0:05:36.162 ********
2026-04-11 03:27:49.118010 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:27:49.118095 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:27:49.118109 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:27:49.118122 | orchestrator |
2026-04-11 03:27:49.118137 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-04-11 03:27:49.118150 | orchestrator | Saturday 11 April 2026 03:24:42 +0000 (0:00:07.353) 0:05:43.515 ********
2026-04-11 03:27:49.118164 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:27:49.118177 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:27:49.118190 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:27:49.118234 | orchestrator |
2026-04-11 03:27:49.118247 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-04-11 03:27:49.118355 | orchestrator | Saturday 11 April 2026 03:24:58 +0000 (0:00:16.240) 0:05:59.756 ********
2026-04-11 03:27:49.118375 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:27:49.118390 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:27:49.118404 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:27:49.118419 | orchestrator |
2026-04-11 03:27:49.118432 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-04-11 03:27:49.118446 | orchestrator | Saturday 11 April 2026 03:25:20 +0000 (0:00:21.562) 0:06:21.319 ********
2026-04-11 03:27:49.118459 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:27:49.118473 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:27:49.118487 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:27:49.118496 | orchestrator |
2026-04-11 03:27:49.118509 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-04-11 03:27:49.118523 | orchestrator | Saturday 11 April 2026 03:25:58 +0000 (0:00:37.983) 0:06:59.302 ********
2026-04-11 03:27:49.118537 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left).
2026-04-11 03:27:49.118551 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2026-04-11 03:27:49.118564 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2026-04-11 03:27:49.118578 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:27:49.118591 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:27:49.118605 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:27:49.118618 | orchestrator |
2026-04-11 03:27:49.118632 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-04-11 03:27:49.118644 | orchestrator | Saturday 11 April 2026 03:26:04 +0000 (0:00:06.270) 0:07:05.573 ********
2026-04-11 03:27:49.118658 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:27:49.118671 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:27:49.118684 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:27:49.118697 | orchestrator |
2026-04-11 03:27:49.118712 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-04-11 03:27:49.118725 | orchestrator | Saturday 11 April 2026 03:26:05 +0000 (0:00:00.817) 0:07:06.391 ********
2026-04-11 03:27:49.118737 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:27:49.118751 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:27:49.118765 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:27:49.118778 | orchestrator |
2026-04-11 03:27:49.118792 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-04-11 03:27:49.118806 | orchestrator | Saturday 11 April 2026 03:26:37 +0000 (0:00:32.186) 0:07:38.578 ********
2026-04-11 03:27:49.118819 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:27:49.118832 | orchestrator |
2026-04-11 03:27:49.118846 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-04-11 03:27:49.118859 | orchestrator | Saturday 11 April 2026 03:26:37 +0000 (0:00:00.139) 0:07:38.718 ********
2026-04-11 03:27:49.118872 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:27:49.118885 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:27:49.118899 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:27:49.118912 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:27:49.118925 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:27:49.118939 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-04-11 03:27:49.118953 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-11 03:27:49.118966 | orchestrator |
2026-04-11 03:27:49.118979 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-04-11 03:27:49.118992 | orchestrator | Saturday 11 April 2026 03:27:00 +0000 (0:00:22.964) 0:08:01.682 ********
2026-04-11 03:27:49.119021 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:27:49.119034 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:27:49.119048 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:27:49.119061 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:27:49.119074 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:27:49.119087 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:27:49.119100 | orchestrator |
2026-04-11 03:27:49.119129 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-04-11 03:27:49.119143 | orchestrator | Saturday 11 April 2026 03:27:11 +0000 (0:00:11.299) 0:08:12.981 ********
2026-04-11 03:27:49.119156 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:27:49.119170 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:27:49.119183 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:27:49.119196 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:27:49.119209 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:27:49.119251 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2026-04-11 03:27:49.119292 | orchestrator |
2026-04-11 03:27:49.119306 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-11 03:27:49.119320 | orchestrator | Saturday 11 April 2026 03:27:17 +0000 (0:00:05.572) 0:08:18.554 ********
2026-04-11 03:27:49.119333 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-11 03:27:49.119346 | orchestrator |
2026-04-11 03:27:49.119359 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-11 03:27:49.119373 | orchestrator | Saturday 11 April 2026 03:27:30 +0000 (0:00:13.190) 0:08:31.744 ********
2026-04-11 03:27:49.119386 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-11 03:27:49.119399 | orchestrator |
2026-04-11 03:27:49.119413 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-04-11 03:27:49.119424 | orchestrator | Saturday 11 April 2026 03:27:32 +0000 (0:00:01.737) 0:08:33.481 ********
2026-04-11 03:27:49.119435 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:27:49.119446 | orchestrator |
2026-04-11 03:27:49.119457 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-04-11 03:27:49.119471 | orchestrator | Saturday 11 April 2026 03:27:34 +0000 (0:00:01.973) 0:08:35.455 ********
2026-04-11 03:27:49.119483 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-11 03:27:49.119496 | orchestrator |
2026-04-11 03:27:49.119509 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-04-11 03:27:49.119523 | orchestrator | Saturday 11 April 2026 03:27:44 +0000 (0:00:10.474) 0:08:45.929 ********
2026-04-11 03:27:49.119537 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:27:49.119551 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:27:49.119564 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:27:49.119577 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:27:49.119590 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:27:49.119604 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:27:49.119617 | orchestrator |
2026-04-11 03:27:49.119631 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-04-11 03:27:49.119644 | orchestrator |
2026-04-11 03:27:49.119657 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-04-11 03:27:49.119670 | orchestrator | Saturday 11 April 2026 03:27:46 +0000 (0:00:01.947) 0:08:47.877 ********
2026-04-11 03:27:49.119683 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:27:49.119697 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:27:49.119711 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:27:49.119724 | orchestrator |
2026-04-11 03:27:49.119737 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-04-11 03:27:49.119751 | orchestrator |
2026-04-11 03:27:49.119763 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-04-11 03:27:49.119777 | orchestrator | Saturday 11 April 2026 03:27:47 +0000 (0:00:00.959) 0:08:48.836 ********
2026-04-11 03:27:49.119790 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:27:49.119814 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:27:49.119828 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:27:49.119841 | orchestrator |
2026-04-11 03:27:49.119854 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-11 03:27:49.119867 | orchestrator |
2026-04-11 03:27:49.119881 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-11 03:27:49.119895 | orchestrator | Saturday 11 April 2026 03:27:48 +0000 (0:00:00.812) 0:08:49.649 ********
2026-04-11 03:27:49.119908 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-11 03:27:49.119921 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-11 03:27:49.119934 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-11 03:27:49.119947 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-11 03:27:49.119960 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-11 03:27:49.119974 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-11 03:27:49.119988 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:27:49.120002 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-11 03:27:49.120014 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-11 03:27:49.120028 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-11 03:27:49.120041 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-11 03:27:49.120055 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-11 03:27:49.120068 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-11 03:27:49.120082 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:27:49.120094 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-11 03:27:49.120107 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-11 03:27:49.120121 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-11 03:27:49.120134 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-11 03:27:49.120147 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-11 03:27:49.120161 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-11 03:27:49.120174 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:27:49.120187 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-04-11 03:27:49.120200 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-11 03:27:49.120221 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-11 03:27:49.120235 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-04-11 03:27:49.120249 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-11 03:27:49.120284 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-11 03:27:49.120298 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:27:49.120311 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-04-11 03:27:49.120333 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-11 03:27:52.559919 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-11 03:27:52.561113 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-04-11 03:27:52.561203 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-11 03:27:52.561229 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-11 03:27:52.561247 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:27:52.561301 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-04-11 03:27:52.561320 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-11 03:27:52.561336 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-11 03:27:52.561353 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-04-11 03:27:52.561401 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-11 03:27:52.561413 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-11 03:27:52.561422 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:27:52.561433 | orchestrator |
2026-04-11 03:27:52.561443 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-11 03:27:52.561453 | orchestrator |
2026-04-11 03:27:52.561463 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-11 03:27:52.561473 | orchestrator | Saturday 11 April 2026 03:27:49 +0000 (0:00:01.520) 0:08:51.169 ********
2026-04-11 03:27:52.561483 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-04-11 03:27:52.561493 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-11 03:27:52.561503 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:27:52.561512 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-04-11 03:27:52.561522 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-11 03:27:52.561534 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:27:52.561551 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-04-11 03:27:52.561567 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-11 03:27:52.561582 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:27:52.561598 | orchestrator |
2026-04-11 03:27:52.561613 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-11 03:27:52.561628 | orchestrator |
2026-04-11 03:27:52.561644 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-11 03:27:52.561662 | orchestrator | Saturday 11 April 2026 03:27:50 +0000 (0:00:00.637) 0:08:51.806 ********
2026-04-11 03:27:52.561678 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:27:52.561692 | orchestrator |
2026-04-11 03:27:52.561707 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-11 03:27:52.561723 | orchestrator |
2026-04-11 03:27:52.561740 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-11 03:27:52.561755 | orchestrator | Saturday 11 April 2026 03:27:51 +0000 (0:00:00.938) 0:08:52.745 ********
2026-04-11 03:27:52.561773 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:27:52.561790 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:27:52.561808 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:27:52.561825 | orchestrator | 2026-04-11 03:27:52.561840 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 03:27:52.561856 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 03:27:52.561876 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-04-11 03:27:52.561892 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-04-11 03:27:52.561909 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-04-11 03:27:52.561925 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-11 03:27:52.561942 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-11 03:27:52.561956 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-11 03:27:52.561966 | orchestrator | 2026-04-11 03:27:52.561976 | orchestrator | 2026-04-11 03:27:52.561986 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 03:27:52.562008 | orchestrator | Saturday 11 April 2026 03:27:52 +0000 (0:00:00.532) 0:08:53.277 ******** 2026-04-11 03:27:52.562079 | orchestrator | =============================================================================== 2026-04-11 03:27:52.562100 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 37.98s 2026-04-11 03:27:52.562118 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.72s 2026-04-11 03:27:52.562152 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 32.19s 2026-04-11 03:27:52.562169 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 23.21s 2026-04-11 03:27:52.562225 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 23.21s 2026-04-11 03:27:52.562243 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.96s 2026-04-11 03:27:52.562498 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.56s 2026-04-11 03:27:52.562538 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.06s 2026-04-11 03:27:52.562553 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.24s 2026-04-11 03:27:52.562566 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.73s 2026-04-11 03:27:52.562579 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.19s 2026-04-11 03:27:52.562592 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.19s 2026-04-11 03:27:52.562606 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.99s 2026-04-11 03:27:52.562620 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.88s 2026-04-11 03:27:52.562635 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.30s 2026-04-11 03:27:52.562650 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.47s 2026-04-11 03:27:52.562666 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.05s 2026-04-11 03:27:52.562680 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.74s 2026-04-11 03:27:52.562693 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.35s 2026-04-11 03:27:52.562708 | orchestrator | service-rabbitmq : nova | 
Ensure RabbitMQ users exist ------------------- 7.35s 2026-04-11 03:27:55.139226 | orchestrator | 2026-04-11 03:27:55 | INFO  | Task cea88841-25a5-4da2-8d82-9908baadea02 (horizon) was prepared for execution. 2026-04-11 03:27:55.139453 | orchestrator | 2026-04-11 03:27:55 | INFO  | It takes a moment until task cea88841-25a5-4da2-8d82-9908baadea02 (horizon) has been started and output is visible here. 2026-04-11 03:28:03.039961 | orchestrator | 2026-04-11 03:28:03.040089 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 03:28:03.040119 | orchestrator | 2026-04-11 03:28:03.040141 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 03:28:03.040156 | orchestrator | Saturday 11 April 2026 03:27:59 +0000 (0:00:00.279) 0:00:00.279 ******** 2026-04-11 03:28:03.040167 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:28:03.040185 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:28:03.040204 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:28:03.040223 | orchestrator | 2026-04-11 03:28:03.040244 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 03:28:03.040301 | orchestrator | Saturday 11 April 2026 03:28:00 +0000 (0:00:00.352) 0:00:00.632 ******** 2026-04-11 03:28:03.040324 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-11 03:28:03.040342 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-11 03:28:03.040354 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-11 03:28:03.040366 | orchestrator | 2026-04-11 03:28:03.040376 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-11 03:28:03.040387 | orchestrator | 2026-04-11 03:28:03.040399 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-11 03:28:03.040436 | 
orchestrator | Saturday 11 April 2026 03:28:00 +0000 (0:00:00.510) 0:00:01.143 ******** 2026-04-11 03:28:03.040448 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:28:03.040462 | orchestrator | 2026-04-11 03:28:03.040476 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-11 03:28:03.040489 | orchestrator | Saturday 11 April 2026 03:28:01 +0000 (0:00:00.563) 0:00:01.707 ******** 2026-04-11 03:28:03.040528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 03:28:03.040574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 03:28:03.040608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 03:28:03.040623 | orchestrator | 2026-04-11 03:28:03.040636 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-11 03:28:03.040650 | orchestrator | Saturday 11 April 2026 03:28:02 +0000 (0:00:01.216) 0:00:02.924 ******** 2026-04-11 03:28:03.040662 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:28:03.040675 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:28:03.040689 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:28:03.040702 | orchestrator | 2026-04-11 03:28:03.040716 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-11 03:28:03.040729 | orchestrator | Saturday 11 April 2026 03:28:02 +0000 (0:00:00.505) 0:00:03.429 ******** 2026-04-11 03:28:03.040750 | orchestrator | skipping: [testbed-node-0] => 
(item={'name': 'cloudkitty', 'enabled': False})  2026-04-11 03:28:09.563054 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-11 03:28:09.563160 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-11 03:28:09.563197 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-11 03:28:09.563209 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-11 03:28:09.563219 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-11 03:28:09.563229 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-11 03:28:09.563239 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-11 03:28:09.563302 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-11 03:28:09.563312 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-11 03:28:09.563322 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-11 03:28:09.563332 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-11 03:28:09.563342 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-11 03:28:09.563351 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-11 03:28:09.563361 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-11 03:28:09.563370 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-11 03:28:09.563380 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-11 03:28:09.563390 | 
orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-11 03:28:09.563399 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-11 03:28:09.563409 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-11 03:28:09.563418 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-11 03:28:09.563428 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-11 03:28:09.563437 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-11 03:28:09.563447 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-11 03:28:09.563458 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-11 03:28:09.563484 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-11 03:28:09.563494 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-11 03:28:09.563504 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-11 03:28:09.563514 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-11 03:28:09.563524 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 
2026-04-11 03:28:09.563534 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-11 03:28:09.563543 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-11 03:28:09.563553 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-11 03:28:09.563572 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-11 03:28:09.563584 | orchestrator | 2026-04-11 03:28:09.563596 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 03:28:09.563608 | orchestrator | Saturday 11 April 2026 03:28:03 +0000 (0:00:00.860) 0:00:04.290 ******** 2026-04-11 03:28:09.563620 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:28:09.563631 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:28:09.563643 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:28:09.563654 | orchestrator | 2026-04-11 03:28:09.563665 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-11 03:28:09.563676 | orchestrator | Saturday 11 April 2026 03:28:04 +0000 (0:00:00.369) 0:00:04.659 ******** 2026-04-11 03:28:09.563688 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:28:09.563699 | orchestrator | 2026-04-11 03:28:09.563726 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-11 03:28:09.563738 | orchestrator | Saturday 11 April 2026 03:28:04 +0000 (0:00:00.336) 0:00:04.995 ******** 2026-04-11 03:28:09.563750 | orchestrator | skipping: [testbed-node-0] 2026-04-11 
03:28:09.563761 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:28:09.563772 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:28:09.563784 | orchestrator | 2026-04-11 03:28:09.563795 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 03:28:09.563807 | orchestrator | Saturday 11 April 2026 03:28:04 +0000 (0:00:00.329) 0:00:05.325 ******** 2026-04-11 03:28:09.563818 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:28:09.563830 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:28:09.563841 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:28:09.563852 | orchestrator | 2026-04-11 03:28:09.563862 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-11 03:28:09.563872 | orchestrator | Saturday 11 April 2026 03:28:05 +0000 (0:00:00.342) 0:00:05.667 ******** 2026-04-11 03:28:09.563881 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:28:09.563891 | orchestrator | 2026-04-11 03:28:09.563901 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-11 03:28:09.563911 | orchestrator | Saturday 11 April 2026 03:28:05 +0000 (0:00:00.171) 0:00:05.839 ******** 2026-04-11 03:28:09.563920 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:28:09.563930 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:28:09.563940 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:28:09.563949 | orchestrator | 2026-04-11 03:28:09.563959 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 03:28:09.563969 | orchestrator | Saturday 11 April 2026 03:28:05 +0000 (0:00:00.338) 0:00:06.178 ******** 2026-04-11 03:28:09.563978 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:28:09.563988 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:28:09.563997 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:28:09.564007 | orchestrator | 
2026-04-11 03:28:09.564017 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-11 03:28:09.564026 | orchestrator | Saturday 11 April 2026 03:28:06 +0000 (0:00:00.544) 0:00:06.722 ******** 2026-04-11 03:28:09.564036 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:28:09.564045 | orchestrator | 2026-04-11 03:28:09.564055 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-11 03:28:09.564065 | orchestrator | Saturday 11 April 2026 03:28:06 +0000 (0:00:00.132) 0:00:06.855 ******** 2026-04-11 03:28:09.564074 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:28:09.564084 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:28:09.564094 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:28:09.564103 | orchestrator | 2026-04-11 03:28:09.564113 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 03:28:09.564122 | orchestrator | Saturday 11 April 2026 03:28:06 +0000 (0:00:00.346) 0:00:07.202 ******** 2026-04-11 03:28:09.564138 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:28:09.564148 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:28:09.564158 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:28:09.564167 | orchestrator | 2026-04-11 03:28:09.564177 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-11 03:28:09.564187 | orchestrator | Saturday 11 April 2026 03:28:06 +0000 (0:00:00.342) 0:00:07.544 ******** 2026-04-11 03:28:09.564197 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:28:09.564206 | orchestrator | 2026-04-11 03:28:09.564216 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-11 03:28:09.564230 | orchestrator | Saturday 11 April 2026 03:28:07 +0000 (0:00:00.138) 0:00:07.682 ******** 2026-04-11 03:28:09.564240 | orchestrator | skipping: 
[testbed-node-0] 2026-04-11 03:28:09.564270 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:28:09.564279 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:28:09.564289 | orchestrator | 2026-04-11 03:28:09.564299 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 03:28:09.564308 | orchestrator | Saturday 11 April 2026 03:28:07 +0000 (0:00:00.570) 0:00:08.253 ******** 2026-04-11 03:28:09.564318 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:28:09.564328 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:28:09.564337 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:28:09.564347 | orchestrator | 2026-04-11 03:28:09.564356 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-11 03:28:09.564366 | orchestrator | Saturday 11 April 2026 03:28:08 +0000 (0:00:00.346) 0:00:08.599 ******** 2026-04-11 03:28:09.564375 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:28:09.564385 | orchestrator | 2026-04-11 03:28:09.564395 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-11 03:28:09.564404 | orchestrator | Saturday 11 April 2026 03:28:08 +0000 (0:00:00.145) 0:00:08.745 ******** 2026-04-11 03:28:09.564414 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:28:09.564423 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:28:09.564433 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:28:09.564443 | orchestrator | 2026-04-11 03:28:09.564452 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 03:28:09.564462 | orchestrator | Saturday 11 April 2026 03:28:08 +0000 (0:00:00.321) 0:00:09.067 ******** 2026-04-11 03:28:09.564472 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:28:09.564481 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:28:09.564491 | orchestrator | ok: [testbed-node-2] 2026-04-11 
03:28:09.564500 | orchestrator |
2026-04-11 03:28:09.564510 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 03:28:09.564519 | orchestrator | Saturday 11 April 2026 03:28:08 +0000 (0:00:00.354) 0:00:09.422 ********
2026-04-11 03:28:09.564529 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:09.564538 | orchestrator |
2026-04-11 03:28:09.564548 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 03:28:09.564557 | orchestrator | Saturday 11 April 2026 03:28:09 +0000 (0:00:00.348) 0:00:09.770 ********
2026-04-11 03:28:09.564567 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:09.564576 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:28:09.564586 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:28:09.564596 | orchestrator |
2026-04-11 03:28:09.564605 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 03:28:09.564621 | orchestrator | Saturday 11 April 2026 03:28:09 +0000 (0:00:00.331) 0:00:10.102 ********
2026-04-11 03:28:24.543644 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:28:24.543719 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:28:24.543723 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:28:24.543728 | orchestrator |
2026-04-11 03:28:24.543733 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 03:28:24.543739 | orchestrator | Saturday 11 April 2026 03:28:09 +0000 (0:00:00.331) 0:00:10.434 ********
2026-04-11 03:28:24.543743 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:24.543763 | orchestrator |
2026-04-11 03:28:24.543767 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 03:28:24.543771 | orchestrator | Saturday 11 April 2026 03:28:10 +0000 (0:00:00.147) 0:00:10.581 ********
2026-04-11 03:28:24.543775 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:24.543779 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:28:24.543783 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:28:24.543787 | orchestrator |
2026-04-11 03:28:24.543791 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 03:28:24.543795 | orchestrator | Saturday 11 April 2026 03:28:10 +0000 (0:00:00.326) 0:00:10.907 ********
2026-04-11 03:28:24.543799 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:28:24.543803 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:28:24.543806 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:28:24.543810 | orchestrator |
2026-04-11 03:28:24.543814 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 03:28:24.543818 | orchestrator | Saturday 11 April 2026 03:28:10 +0000 (0:00:00.610) 0:00:11.518 ********
2026-04-11 03:28:24.543821 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:24.543825 | orchestrator |
2026-04-11 03:28:24.543829 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 03:28:24.543833 | orchestrator | Saturday 11 April 2026 03:28:11 +0000 (0:00:00.149) 0:00:11.667 ********
2026-04-11 03:28:24.543836 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:24.543840 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:28:24.543844 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:28:24.543847 | orchestrator |
2026-04-11 03:28:24.543851 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 03:28:24.543855 | orchestrator | Saturday 11 April 2026 03:28:11 +0000 (0:00:00.324) 0:00:11.991 ********
2026-04-11 03:28:24.543859 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:28:24.543863 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:28:24.543866 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:28:24.543870 | orchestrator |
2026-04-11 03:28:24.543874 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 03:28:24.543878 | orchestrator | Saturday 11 April 2026 03:28:11 +0000 (0:00:00.379) 0:00:12.371 ********
2026-04-11 03:28:24.543881 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:24.543885 | orchestrator |
2026-04-11 03:28:24.543889 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 03:28:24.543893 | orchestrator | Saturday 11 April 2026 03:28:11 +0000 (0:00:00.165) 0:00:12.536 ********
2026-04-11 03:28:24.543896 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:24.543900 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:28:24.543904 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:28:24.543907 | orchestrator |
2026-04-11 03:28:24.543911 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 03:28:24.543915 | orchestrator | Saturday 11 April 2026 03:28:12 +0000 (0:00:00.624) 0:00:13.161 ********
2026-04-11 03:28:24.543919 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:28:24.543932 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:28:24.543936 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:28:24.543939 | orchestrator |
2026-04-11 03:28:24.543943 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 03:28:24.543947 | orchestrator | Saturday 11 April 2026 03:28:12 +0000 (0:00:00.364) 0:00:13.525 ********
2026-04-11 03:28:24.543951 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:24.543954 | orchestrator |
2026-04-11 03:28:24.543958 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 03:28:24.543962 | orchestrator | Saturday 11 April 2026 03:28:13 +0000 (0:00:00.161) 0:00:13.687 ********
2026-04-11 03:28:24.543966 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:24.543972 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:28:24.543978 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:28:24.543984 | orchestrator |
2026-04-11 03:28:24.544000 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-11 03:28:24.544008 | orchestrator | Saturday 11 April 2026 03:28:13 +0000 (0:00:00.328) 0:00:14.016 ********
2026-04-11 03:28:24.544014 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:28:24.544020 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:28:24.544026 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:28:24.544031 | orchestrator |
2026-04-11 03:28:24.544037 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-11 03:28:24.544043 | orchestrator | Saturday 11 April 2026 03:28:15 +0000 (0:00:01.890) 0:00:15.906 ********
2026-04-11 03:28:24.544049 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-11 03:28:24.544056 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-11 03:28:24.544063 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-11 03:28:24.544069 | orchestrator |
2026-04-11 03:28:24.544076 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-11 03:28:24.544082 | orchestrator | Saturday 11 April 2026 03:28:17 +0000 (0:00:02.020) 0:00:17.926 ********
2026-04-11 03:28:24.544088 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-11 03:28:24.544096 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-11 03:28:24.544102 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-11 03:28:24.544108 | orchestrator |
2026-04-11 03:28:24.544113 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-11 03:28:24.544132 | orchestrator | Saturday 11 April 2026 03:28:19 +0000 (0:00:01.960) 0:00:19.887 ********
2026-04-11 03:28:24.544140 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-11 03:28:24.544146 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-11 03:28:24.544152 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-11 03:28:24.544157 | orchestrator |
2026-04-11 03:28:24.544163 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-11 03:28:24.544170 | orchestrator | Saturday 11 April 2026 03:28:20 +0000 (0:00:01.598) 0:00:21.485 ********
2026-04-11 03:28:24.544176 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:24.544182 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:28:24.544189 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:28:24.544195 | orchestrator |
2026-04-11 03:28:24.544201 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-11 03:28:24.544207 | orchestrator | Saturday 11 April 2026 03:28:21 +0000 (0:00:00.567) 0:00:22.053 ********
2026-04-11 03:28:24.544214 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:24.544221 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:28:24.544229 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:28:24.544259 | orchestrator |
2026-04-11 03:28:24.544265 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-11 03:28:24.544270 | orchestrator | Saturday 11 April 2026 03:28:21 +0000 (0:00:00.654) 0:00:22.385 ********
2026-04-11 03:28:24.544274 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:28:24.544279 | orchestrator |
2026-04-11 03:28:24.544283 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-04-11 03:28:24.544287 | orchestrator | Saturday 11 April 2026 03:28:22 +0000 (0:00:00.654) 0:00:23.039 ********
2026-04-11 03:28:24.544302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 03:28:24.544321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 03:28:25.238612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 03:28:25.238723 | orchestrator |
2026-04-11 03:28:25.238745 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2026-04-11 03:28:25.238763 | orchestrator | Saturday 11 April 2026 03:28:24 +0000 (0:00:02.034) 0:00:25.074 ********
2026-04-11 03:28:25.238800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 03:28:25.238844 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:25.238881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 03:28:25.238898 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:28:25.238931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 03:28:28.023359 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:28:28.023433 | orchestrator |
2026-04-11 03:28:28.023440 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-04-11 03:28:28.023446 | orchestrator | Saturday 11 April 2026 03:28:25 +0000 (0:00:00.702) 0:00:25.777 ********
2026-04-11 03:28:28.023454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 03:28:28.023462 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:28:28.023478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 03:28:28.023502 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:28:28.023541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 03:28:28.023550 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:28:28.023555 | orchestrator |
2026-04-11 03:28:28.023559 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2026-04-11 03:28:28.023564 | orchestrator | Saturday 11 April 2026 03:28:26 +0000 (0:00:00.900) 0:00:26.677 ********
2026-04-11 03:28:28.023576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 03:29:17.783772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 03:29:17.784003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 03:29:17.784039 | orchestrator |
2026-04-11 03:29:17.784061 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-11 03:29:17.784081 | orchestrator | Saturday 11 April 2026 03:28:28 +0000 (0:00:01.885) 0:00:28.563 ********
2026-04-11 03:29:17.784092 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:29:17.784104 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:29:17.784114 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:29:17.784125 | orchestrator |
2026-04-11 03:29:17.784136 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-11 03:29:17.784146 | orchestrator | Saturday 11 April 2026 03:28:28 +0000 (0:00:00.369) 0:00:28.933 ********
2026-04-11 03:29:17.784157 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:29:17.784168 | orchestrator |
2026-04-11 03:29:17.784178 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-04-11 03:29:17.784189 | orchestrator | Saturday 11 April 2026 03:28:28 +0000 (0:00:00.580) 0:00:29.513 ********
2026-04-11 03:29:17.784200 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:29:17.784210 | orchestrator |
2026-04-11 03:29:17.784274 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-04-11 03:29:17.784299 | orchestrator | Saturday 11 April 2026 03:28:31 +0000 (0:00:02.287) 0:00:31.800 ********
2026-04-11 03:29:17.784312 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:29:17.784324 | orchestrator |
2026-04-11 03:29:17.784336 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-04-11 03:29:17.784349 | orchestrator | Saturday 11 April 2026 03:28:33 +0000 (0:00:02.621) 0:00:34.421 ********
2026-04-11 03:29:17.784361 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:29:17.784373 | orchestrator |
2026-04-11 03:29:17.784384 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-11 03:29:17.784398 | orchestrator | Saturday 11 April 2026 03:28:50 +0000 (0:00:16.837) 0:00:51.259 ********
2026-04-11 03:29:17.784410 | orchestrator |
2026-04-11 03:29:17.784423 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-11 03:29:17.784435 | orchestrator | Saturday 11 April 2026 03:28:50 +0000 (0:00:00.076) 0:00:51.335 ********
2026-04-11 03:29:17.784448 | orchestrator |
2026-04-11 03:29:17.784459 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-11 03:29:17.784470 | orchestrator | Saturday 11 April 2026 03:28:50 +0000 (0:00:00.070) 0:00:51.406 ********
2026-04-11 03:29:17.784481 | orchestrator |
2026-04-11 03:29:17.784492 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-04-11 03:29:17.784502 | orchestrator | Saturday 11 April 2026 03:28:50 +0000 (0:00:00.082) 0:00:51.489 ********
2026-04-11 03:29:17.784513 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:29:17.784523 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:29:17.784534 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:29:17.784545 | orchestrator |
2026-04-11 03:29:17.784555 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 03:29:17.784567 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0
skipped=25  rescued=0 ignored=0 2026-04-11 03:29:17.784580 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-11 03:29:17.784591 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-11 03:29:17.784601 | orchestrator | 2026-04-11 03:29:17.784612 | orchestrator | 2026-04-11 03:29:17.784623 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 03:29:17.784634 | orchestrator | Saturday 11 April 2026 03:29:17 +0000 (0:00:26.806) 0:01:18.295 ******** 2026-04-11 03:29:17.784645 | orchestrator | =============================================================================== 2026-04-11 03:29:17.784673 | orchestrator | horizon : Restart horizon container ------------------------------------ 26.81s 2026-04-11 03:29:17.784686 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.84s 2026-04-11 03:29:17.784697 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.62s 2026-04-11 03:29:17.784707 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.29s 2026-04-11 03:29:17.784722 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.03s 2026-04-11 03:29:17.784740 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.02s 2026-04-11 03:29:17.784761 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.96s 2026-04-11 03:29:17.784789 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.89s 2026-04-11 03:29:17.784805 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.89s 2026-04-11 03:29:17.784822 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.60s 
2026-04-11 03:29:17.784840 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.22s 2026-04-11 03:29:17.784856 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.90s 2026-04-11 03:29:17.784884 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.86s 2026-04-11 03:29:17.784914 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.70s 2026-04-11 03:29:18.233443 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s 2026-04-11 03:29:18.233517 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.62s 2026-04-11 03:29:18.233532 | orchestrator | horizon : Update policy file name --------------------------------------- 0.61s 2026-04-11 03:29:18.233547 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s 2026-04-11 03:29:18.233560 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.57s 2026-04-11 03:29:18.233575 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.57s 2026-04-11 03:29:20.901622 | orchestrator | 2026-04-11 03:29:20 | INFO  | Task 0e90c23f-4dcf-4458-b3af-72e0c940ed16 (skyline) was prepared for execution. 2026-04-11 03:29:20.901751 | orchestrator | 2026-04-11 03:29:20 | INFO  | It takes a moment until task 0e90c23f-4dcf-4458-b3af-72e0c940ed16 (skyline) has been started and output is visible here. 
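The horizon play above ends with a standard Ansible PLAY RECAP (`ok=37 changed=11 unreachable=0 failed=0 …` per host). When post-processing console logs like this one, those counters can be extracted mechanically; the sketch below is illustrative tooling, not part of the job itself, and the field names simply follow Ansible's recap format.

```python
import re

# Match Ansible "PLAY RECAP" host lines such as:
#   testbed-node-0 : ok=37 changed=11 unreachable=0 failed=0 skipped=25 rescued=0 ignored=0
RECAP_RE = re.compile(
    r"(?P<host>[\w.-]+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
    r"(?:\s+skipped=(?P<skipped>\d+))?"
)

def parse_recap(lines):
    """Return {host: {counter: int}} for every recap line found."""
    results = {}
    for line in lines:
        m = RECAP_RE.search(line)
        if m:
            results[m.group("host")] = {
                k: int(v)
                for k, v in m.groupdict().items()
                if k != "host" and v is not None
            }
    return results

# Sample lines taken from the recap in the log above.
log = [
    "testbed-node-0 : ok=37 changed=11 unreachable=0 failed=0 skipped=25 rescued=0 ignored=0",
    "testbed-node-1 : ok=34 changed=8 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0",
]
recap = parse_recap(log)
assert recap["testbed-node-0"]["failed"] == 0
assert all(counts["unreachable"] == 0 for counts in recap.values())
```

A quick check like this (all `failed=0`, all `unreachable=0`) is what distinguishes a healthy run such as the one above from one that needs investigation.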
2026-04-11 03:29:52.583854 | orchestrator | 2026-04-11 03:29:52.583953 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 03:29:52.583964 | orchestrator | 2026-04-11 03:29:52.583972 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 03:29:52.583979 | orchestrator | Saturday 11 April 2026 03:29:25 +0000 (0:00:00.310) 0:00:00.310 ******** 2026-04-11 03:29:52.583986 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:29:52.583993 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:29:52.584000 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:29:52.584006 | orchestrator | 2026-04-11 03:29:52.584013 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 03:29:52.584019 | orchestrator | Saturday 11 April 2026 03:29:25 +0000 (0:00:00.314) 0:00:00.625 ******** 2026-04-11 03:29:52.584026 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-04-11 03:29:52.584032 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-04-11 03:29:52.584039 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-04-11 03:29:52.584047 | orchestrator | 2026-04-11 03:29:52.584057 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-04-11 03:29:52.584067 | orchestrator | 2026-04-11 03:29:52.584073 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-11 03:29:52.584079 | orchestrator | Saturday 11 April 2026 03:29:26 +0000 (0:00:00.488) 0:00:01.113 ******** 2026-04-11 03:29:52.584086 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:29:52.584093 | orchestrator | 2026-04-11 03:29:52.584099 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-04-11 03:29:52.584106 | orchestrator | Saturday 11 April 2026 03:29:26 +0000 (0:00:00.590) 0:00:01.704 ******** 2026-04-11 03:29:52.584112 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-04-11 03:29:52.584118 | orchestrator | 2026-04-11 03:29:52.584125 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-04-11 03:29:52.584143 | orchestrator | Saturday 11 April 2026 03:29:30 +0000 (0:00:03.498) 0:00:05.203 ******** 2026-04-11 03:29:52.584158 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-04-11 03:29:52.584165 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-04-11 03:29:52.584171 | orchestrator | 2026-04-11 03:29:52.584186 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-04-11 03:29:52.584193 | orchestrator | Saturday 11 April 2026 03:29:36 +0000 (0:00:06.433) 0:00:11.636 ******** 2026-04-11 03:29:52.584246 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-11 03:29:52.584276 | orchestrator | 2026-04-11 03:29:52.584283 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-04-11 03:29:52.584289 | orchestrator | Saturday 11 April 2026 03:29:40 +0000 (0:00:03.120) 0:00:14.757 ******** 2026-04-11 03:29:52.584295 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-11 03:29:52.584302 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-04-11 03:29:52.584311 | orchestrator | 2026-04-11 03:29:52.584335 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-04-11 03:29:52.584345 | orchestrator | Saturday 11 April 2026 03:29:44 +0000 (0:00:04.072) 0:00:18.829 ******** 2026-04-11 03:29:52.584354 | orchestrator | ok: [testbed-node-0] => (item=admin) 
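The `service-ks-register` tasks above register skyline in Keystone in a fixed order (service, endpoints, project, user, roles, role grants). The endpoint URLs shown (`https://api-int.testbed.osism.xyz:9998` internal, `https://api.testbed.osism.xyz:9998` public) are composed from the internal/external FQDN plus the service's listen port; a minimal sketch of that composition, using the FQDN and port values visible in the log (the helper name is illustrative):

```python
def endpoint_url(fqdn: str, port: int, scheme: str = "https") -> str:
    # Endpoint in the shape seen in the log: scheme://fqdn:port, no path.
    return f"{scheme}://{fqdn}:{port}"

# FQDNs and port taken from the skyline registration output above.
internal = endpoint_url("api-int.testbed.osism.xyz", 9998)
public = endpoint_url("api.testbed.osism.xyz", 9998)
assert internal == "https://api-int.testbed.osism.xyz:9998"
assert public == "https://api.testbed.osism.xyz:9998"
```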
2026-04-11 03:29:52.584364 | orchestrator | 2026-04-11 03:29:52.584374 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-04-11 03:29:52.584385 | orchestrator | Saturday 11 April 2026 03:29:47 +0000 (0:00:03.092) 0:00:21.922 ******** 2026-04-11 03:29:52.584396 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-04-11 03:29:52.584406 | orchestrator | 2026-04-11 03:29:52.584416 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-04-11 03:29:52.584426 | orchestrator | Saturday 11 April 2026 03:29:51 +0000 (0:00:04.043) 0:00:25.965 ******** 2026-04-11 03:29:52.584440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:29:52.584476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:29:52.584489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:29:52.584516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:29:52.584529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:29:52.584552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:29:56.468114 | orchestrator | 2026-04-11 03:29:56.468248 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-11 03:29:56.468264 | orchestrator | Saturday 11 April 2026 03:29:52 +0000 (0:00:01.347) 0:00:27.312 ******** 2026-04-11 03:29:56.468273 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:29:56.468280 | orchestrator | 2026-04-11 03:29:56.468286 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-04-11 03:29:56.468292 | orchestrator | Saturday 11 April 2026 03:29:53 +0000 (0:00:00.780) 0:00:28.092 ******** 2026-04-11 03:29:56.468301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:29:56.468344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:29:56.468351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:29:56.468374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:29:56.468381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:29:56.468392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:29:56.468397 | orchestrator | 2026-04-11 03:29:56.468407 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-04-11 03:29:56.468413 | orchestrator | Saturday 11 April 2026 03:29:55 +0000 (0:00:02.452) 0:00:30.545 ******** 2026-04-11 03:29:56.468419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-11 03:29:56.468426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-11 03:29:56.468432 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:29:56.468443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-11 03:29:57.934898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-11 03:29:57.935017 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:29:57.935051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-11 03:29:57.935065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-11 03:29:57.935076 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:29:57.935086 | orchestrator | 2026-04-11 03:29:57.935098 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-04-11 03:29:57.935109 | orchestrator | Saturday 11 April 2026 03:29:56 +0000 (0:00:00.655) 0:00:31.200 ******** 2026-04-11 03:29:57.935120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-11 03:29:57.935169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-11 03:29:57.935181 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:29:57.935241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-11 03:29:57.935255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-11 03:29:57.935265 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:29:57.935275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-11 03:29:57.935302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-11 03:30:07.029656 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:30:07.029750 | orchestrator | 2026-04-11 03:30:07.029758 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-04-11 03:30:07.029764 | orchestrator | Saturday 11 April 2026 03:29:57 +0000 (0:00:01.458) 0:00:32.659 ******** 2026-04-11 03:30:07.029782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:07.029789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:07.029795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:07.029817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:07.029837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:07.029841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-11 03:30:07.029846 | orchestrator |
2026-04-11 03:30:07.029850 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] *******************
2026-04-11 03:30:07.029854 | orchestrator | Saturday 11 April 2026 03:30:00 +0000 (0:00:02.612) 0:00:35.272 ********
2026-04-11 03:30:07.029858 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-04-11 03:30:07.029862 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-04-11 03:30:07.029865 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-04-11 03:30:07.029873 | orchestrator |
2026-04-11 03:30:07.029877 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ********************
2026-04-11 03:30:07.029880 | orchestrator | Saturday 11 April 2026 03:30:02 +0000 (0:00:01.682) 0:00:36.954 ********
2026-04-11 03:30:07.029884 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-04-11 03:30:07.029888 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-04-11 03:30:07.029892 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-04-11 03:30:07.029896 | orchestrator |
2026-04-11 03:30:07.029900 | orchestrator | TASK [skyline : Copying over config.json files for services] *******************
2026-04-11 03:30:07.029904 | orchestrator | Saturday 11 April 2026 03:30:04 +0000 (0:00:02.280) 0:00:39.234 ********
2026-04-11 03:30:07.029908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes':
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:07.029919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:09.324277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:09.324397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:09.324447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:09.324463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:09.324477 | orchestrator | 2026-04-11 03:30:09.324490 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-04-11 03:30:09.324503 | orchestrator | Saturday 11 April 2026 03:30:07 +0000 (0:00:02.530) 0:00:41.764 ******** 2026-04-11 03:30:09.324515 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:30:09.324528 | orchestrator | skipping: 
[testbed-node-1] 2026-04-11 03:30:09.324555 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:30:09.324568 | orchestrator | 2026-04-11 03:30:09.324600 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-04-11 03:30:09.324613 | orchestrator | Saturday 11 April 2026 03:30:07 +0000 (0:00:00.360) 0:00:42.125 ******** 2026-04-11 03:30:09.324624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:09.324646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:09.324659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:09.324671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:09.324698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:49.526743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-11 03:30:49.526829 | orchestrator | 2026-04-11 03:30:49.526838 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-04-11 03:30:49.526843 | orchestrator | Saturday 11 April 2026 03:30:09 +0000 (0:00:01.929) 0:00:44.055 ******** 2026-04-11 03:30:49.526848 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:30:49.526854 | orchestrator | 2026-04-11 03:30:49.526858 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-04-11 03:30:49.526863 | orchestrator | Saturday 11 April 2026 03:30:11 +0000 (0:00:02.096) 0:00:46.151 ******** 2026-04-11 03:30:49.526867 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:30:49.526872 | orchestrator | 2026-04-11 03:30:49.526876 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-04-11 03:30:49.526881 | orchestrator | Saturday 11 April 2026 03:30:13 +0000 (0:00:02.276) 0:00:48.428 ******** 2026-04-11 03:30:49.526885 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:30:49.526890 | orchestrator | 2026-04-11 03:30:49.526894 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-11 03:30:49.526899 | orchestrator | Saturday 11 April 2026 03:30:21 +0000 (0:00:07.996) 0:00:56.424 ******** 2026-04-11 03:30:49.526903 | orchestrator | 2026-04-11 03:30:49.526908 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-11 03:30:49.526912 | orchestrator | Saturday 11 April 2026 03:30:21 +0000 (0:00:00.089) 0:00:56.514 ******** 2026-04-11 03:30:49.526917 | orchestrator | 2026-04-11 03:30:49.526921 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-04-11 03:30:49.526925 | orchestrator | Saturday 11 April 2026 03:30:21 +0000 (0:00:00.072) 0:00:56.586 ******** 2026-04-11 03:30:49.526930 | orchestrator | 2026-04-11 03:30:49.526934 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-04-11 03:30:49.526939 | orchestrator | Saturday 11 April 2026 03:30:21 +0000 (0:00:00.074) 0:00:56.661 ******** 2026-04-11 03:30:49.526943 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:30:49.526947 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:30:49.526952 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:30:49.526956 | orchestrator | 2026-04-11 03:30:49.526961 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-04-11 03:30:49.526965 | orchestrator | Saturday 11 April 2026 03:30:33 +0000 (0:00:12.001) 0:01:08.663 ******** 2026-04-11 03:30:49.526970 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:30:49.526974 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:30:49.526979 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:30:49.526983 | orchestrator | 2026-04-11 03:30:49.526987 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 03:30:49.526993 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-11 03:30:49.526999 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-11 03:30:49.527019 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-11 03:30:49.527023 | orchestrator | 2026-04-11 03:30:49.527028 | orchestrator | 2026-04-11 03:30:49.527032 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 03:30:49.527037 | orchestrator | Saturday 11 
April 2026 03:30:49 +0000 (0:00:15.224) 0:01:23.888 ******** 2026-04-11 03:30:49.527051 | orchestrator | =============================================================================== 2026-04-11 03:30:49.527055 | orchestrator | skyline : Restart skyline-console container ---------------------------- 15.22s 2026-04-11 03:30:49.527060 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 12.00s 2026-04-11 03:30:49.527064 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 8.00s 2026-04-11 03:30:49.527069 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.43s 2026-04-11 03:30:49.527073 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.07s 2026-04-11 03:30:49.527078 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 4.04s 2026-04-11 03:30:49.527082 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.50s 2026-04-11 03:30:49.527086 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.12s 2026-04-11 03:30:49.527100 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.09s 2026-04-11 03:30:49.527105 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.61s 2026-04-11 03:30:49.527109 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.53s 2026-04-11 03:30:49.527114 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.45s 2026-04-11 03:30:49.527118 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.28s 2026-04-11 03:30:49.527123 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.28s 2026-04-11 03:30:49.527127 | orchestrator | skyline : Creating Skyline 
database ------------------------------------- 2.10s 2026-04-11 03:30:49.527131 | orchestrator | skyline : Check skyline container --------------------------------------- 1.93s 2026-04-11 03:30:49.527136 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.68s 2026-04-11 03:30:49.527140 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.46s 2026-04-11 03:30:49.527145 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.35s 2026-04-11 03:30:49.527149 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.78s 2026-04-11 03:30:52.093111 | orchestrator | 2026-04-11 03:30:52 | INFO  | Task 1dc72879-d8a0-4b3f-a74f-03fc31108d85 (glance) was prepared for execution. 2026-04-11 03:30:52.093260 | orchestrator | 2026-04-11 03:30:52 | INFO  | It takes a moment until task 1dc72879-d8a0-4b3f-a74f-03fc31108d85 (glance) has been started and output is visible here. 
2026-04-11 03:31:27.136338 | orchestrator |
2026-04-11 03:31:27.136442 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 03:31:27.136456 | orchestrator |
2026-04-11 03:31:27.136463 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 03:31:27.136471 | orchestrator | Saturday 11 April 2026 03:30:56 +0000 (0:00:00.282) 0:00:00.282 ********
2026-04-11 03:31:27.136478 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:31:27.136486 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:31:27.136492 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:31:27.136498 | orchestrator |
2026-04-11 03:31:27.136505 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 03:31:27.136511 | orchestrator | Saturday 11 April 2026 03:30:57 +0000 (0:00:00.336) 0:00:00.618 ********
2026-04-11 03:31:27.136517 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-11 03:31:27.136524 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-11 03:31:27.136555 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-11 03:31:27.136562 | orchestrator |
2026-04-11 03:31:27.136569 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-11 03:31:27.136575 | orchestrator |
2026-04-11 03:31:27.136581 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-11 03:31:27.136587 | orchestrator | Saturday 11 April 2026 03:30:57 +0000 (0:00:00.495) 0:00:01.113 ********
2026-04-11 03:31:27.136593 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:31:27.136600 | orchestrator |
2026-04-11 03:31:27.136608 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-04-11 03:31:27.136614 | orchestrator | Saturday 11 April 2026 03:30:58 +0000 (0:00:00.617) 0:00:01.731 ********
2026-04-11 03:31:27.136621 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-04-11 03:31:27.136629 | orchestrator |
2026-04-11 03:31:27.136635 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-04-11 03:31:27.136642 | orchestrator | Saturday 11 April 2026 03:31:01 +0000 (0:00:03.388) 0:00:05.119 ********
2026-04-11 03:31:27.136649 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-04-11 03:31:27.136657 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-04-11 03:31:27.136664 | orchestrator |
2026-04-11 03:31:27.136671 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-04-11 03:31:27.136678 | orchestrator | Saturday 11 April 2026 03:31:07 +0000 (0:00:06.307) 0:00:11.427 ********
2026-04-11 03:31:27.136685 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-11 03:31:27.136693 | orchestrator |
2026-04-11 03:31:27.136700 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-04-11 03:31:27.136706 | orchestrator | Saturday 11 April 2026 03:31:11 +0000 (0:00:03.197) 0:00:14.625 ********
2026-04-11 03:31:27.136713 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-11 03:31:27.136733 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-04-11 03:31:27.136742 | orchestrator |
2026-04-11 03:31:27.136749 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-04-11 03:31:27.136755 | orchestrator | Saturday 11 April 2026 03:31:15 +0000 (0:00:04.125) 0:00:18.751 ********
2026-04-11 03:31:27.136762 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-11
03:31:27.136768 | orchestrator | 2026-04-11 03:31:27.136774 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-04-11 03:31:27.136779 | orchestrator | Saturday 11 April 2026 03:31:18 +0000 (0:00:03.170) 0:00:21.922 ******** 2026-04-11 03:31:27.136785 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-04-11 03:31:27.136791 | orchestrator | 2026-04-11 03:31:27.136796 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-11 03:31:27.136802 | orchestrator | Saturday 11 April 2026 03:31:22 +0000 (0:00:03.852) 0:00:25.775 ******** 2026-04-11 03:31:27.136834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 03:31:27.136853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 03:31:27.136866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 03:31:27.136878 | orchestrator | 2026-04-11 03:31:27.136885 | orchestrator | TASK [glance : include_tasks] 
**************************************************
2026-04-11 03:31:27.136893 | orchestrator | Saturday 11 April 2026 03:31:26 +0000 (0:00:04.006) 0:00:29.781 ********
2026-04-11 03:31:27.136900 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:31:27.136908 | orchestrator |
2026-04-11 03:31:27.136921 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-11 03:31:44.174380 | orchestrator | Saturday 11 April 2026 03:31:27 +0000 (0:00:00.816) 0:00:30.598 ********
2026-04-11 03:31:44.174481 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:31:44.174495 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:31:44.174501 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:31:44.174507 | orchestrator |
2026-04-11 03:31:44.174515 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-11 03:31:44.174522 | orchestrator | Saturday 11 April 2026 03:31:31 +0000 (0:00:03.977) 0:00:34.575 ********
2026-04-11 03:31:44.174530 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-11 03:31:44.174538 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-11 03:31:44.174544 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-11 03:31:44.174550 | orchestrator |
2026-04-11 03:31:44.174556 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-11 03:31:44.174562 | orchestrator | Saturday 11 April 2026 03:31:32 +0000 (0:00:01.594) 0:00:36.170 ********
2026-04-11 03:31:44.174568 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-11 03:31:44.174574 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-11 03:31:44.174581 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-11 03:31:44.174587 | orchestrator |
2026-04-11 03:31:44.174593 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-11 03:31:44.174599 | orchestrator | Saturday 11 April 2026 03:31:34 +0000 (0:00:01.444) 0:00:37.614 ********
2026-04-11 03:31:44.174606 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:31:44.174613 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:31:44.174620 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:31:44.174626 | orchestrator |
2026-04-11 03:31:44.174633 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-11 03:31:44.174639 | orchestrator | Saturday 11 April 2026 03:31:34 +0000 (0:00:00.727) 0:00:38.342 ********
2026-04-11 03:31:44.174645 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:31:44.174651 | orchestrator |
2026-04-11 03:31:44.174658 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-11 03:31:44.174664 | orchestrator | Saturday 11 April 2026 03:31:35 +0000 (0:00:00.144) 0:00:38.487 ********
2026-04-11 03:31:44.174671 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:31:44.174678 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:31:44.174684 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:31:44.174690 | orchestrator |
2026-04-11 03:31:44.174696 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-11 03:31:44.174718 | orchestrator | Saturday 11 April 2026 03:31:35 +0000 (0:00:00.338) 0:00:38.825 ********
2026-04-11 03:31:44.174725 | orchestrator | included:
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:31:44.174731 | orchestrator | 2026-04-11 03:31:44.174738 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-11 03:31:44.174762 | orchestrator | Saturday 11 April 2026 03:31:36 +0000 (0:00:00.838) 0:00:39.664 ******** 2026-04-11 03:31:44.174775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 03:31:44.174802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 03:31:44.174833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 03:31:44.174846 | orchestrator | 2026-04-11 03:31:44.174853 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-11 03:31:44.174859 | orchestrator | Saturday 11 April 2026 03:31:40 +0000 (0:00:04.312) 0:00:43.976 ******** 2026-04-11 03:31:44.174873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 03:31:48.146700 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:31:48.146838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 03:31:48.146890 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:31:48.146908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 03:31:48.146921 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:31:48.146934 | orchestrator | 2026-04-11 03:31:48.146950 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-11 03:31:48.146965 | orchestrator | Saturday 11 April 2026 03:31:44 +0000 (0:00:03.659) 0:00:47.636 ******** 2026-04-11 03:31:48.147008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 03:31:48.147027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 03:31:48.147037 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:31:48.147045 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:31:48.147062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 03:32:26.648283 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:32:26.648382 | orchestrator | 2026-04-11 03:32:26.648395 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-11 03:32:26.648405 | orchestrator | Saturday 11 April 2026 03:31:48 +0000 (0:00:03.970) 0:00:51.607 ******** 2026-04-11 03:32:26.648412 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:32:26.648434 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:32:26.648441 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:32:26.648447 | orchestrator | 2026-04-11 03:32:26.648454 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-11 03:32:26.648460 | orchestrator | Saturday 11 April 2026 03:31:51 +0000 (0:00:03.654) 0:00:55.261 ******** 2026-04-11 03:32:26.648471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 03:32:26.648482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 03:32:26.648530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 03:32:26.648540 | orchestrator | 2026-04-11 03:32:26.648547 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-11 03:32:26.648554 | orchestrator | Saturday 11 April 2026 03:31:56 +0000 (0:00:04.248) 0:00:59.509 ******** 2026-04-11 03:32:26.648559 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:32:26.648563 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:32:26.648566 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:32:26.648570 | orchestrator | 2026-04-11 03:32:26.648574 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-11 03:32:26.648578 | orchestrator | Saturday 11 April 2026 03:32:02 +0000 (0:00:06.255) 0:01:05.765 ******** 2026-04-11 03:32:26.648582 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:32:26.648585 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:32:26.648589 | 
orchestrator | skipping: [testbed-node-2] 2026-04-11 03:32:26.648593 | orchestrator | 2026-04-11 03:32:26.648597 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-04-11 03:32:26.648600 | orchestrator | Saturday 11 April 2026 03:32:06 +0000 (0:00:03.918) 0:01:09.683 ******** 2026-04-11 03:32:26.648604 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:32:26.648608 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:32:26.648612 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:32:26.648615 | orchestrator | 2026-04-11 03:32:26.648619 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-11 03:32:26.648623 | orchestrator | Saturday 11 April 2026 03:32:09 +0000 (0:00:03.573) 0:01:13.257 ******** 2026-04-11 03:32:26.648627 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:32:26.648630 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:32:26.648634 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:32:26.648638 | orchestrator | 2026-04-11 03:32:26.648642 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-11 03:32:26.648645 | orchestrator | Saturday 11 April 2026 03:32:13 +0000 (0:00:03.927) 0:01:17.184 ******** 2026-04-11 03:32:26.648654 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:32:26.648658 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:32:26.648662 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:32:26.648666 | orchestrator | 2026-04-11 03:32:26.648670 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-11 03:32:26.648673 | orchestrator | Saturday 11 April 2026 03:32:17 +0000 (0:00:04.043) 0:01:21.227 ******** 2026-04-11 03:32:26.648677 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:32:26.648681 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:32:26.648685 | 
orchestrator | skipping: [testbed-node-2] 2026-04-11 03:32:26.648689 | orchestrator | 2026-04-11 03:32:26.648693 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-11 03:32:26.648696 | orchestrator | Saturday 11 April 2026 03:32:18 +0000 (0:00:00.654) 0:01:21.882 ******** 2026-04-11 03:32:26.648700 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-11 03:32:26.648705 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:32:26.648709 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-11 03:32:26.648713 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:32:26.648717 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-11 03:32:26.648720 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:32:26.648724 | orchestrator | 2026-04-11 03:32:26.648728 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-11 03:32:26.648732 | orchestrator | Saturday 11 April 2026 03:32:22 +0000 (0:00:03.637) 0:01:25.520 ******** 2026-04-11 03:32:26.648736 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:32:26.648739 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:32:26.648746 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:32:26.648752 | orchestrator | 2026-04-11 03:32:26.648762 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-04-11 03:32:26.648773 | orchestrator | Saturday 11 April 2026 03:32:26 +0000 (0:00:04.590) 0:01:30.110 ******** 2026-04-11 03:33:47.194366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 03:33:47.194444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 03:33:47.194479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 03:33:47.194485 | orchestrator | 2026-04-11 03:33:47.194490 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-11 03:33:47.194495 | orchestrator | Saturday 11 April 2026 03:32:30 +0000 (0:00:04.041) 0:01:34.152 ******** 2026-04-11 03:33:47.194499 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:33:47.194512 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:33:47.194516 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:33:47.194520 | orchestrator | 2026-04-11 03:33:47.194530 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-04-11 03:33:47.194533 | orchestrator | Saturday 11 April 2026 03:32:31 +0000 (0:00:00.565) 0:01:34.718 ******** 2026-04-11 03:33:47.194537 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:33:47.194541 | orchestrator | 2026-04-11 03:33:47.194549 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-04-11 03:33:47.194553 | orchestrator | Saturday 11 April 2026 03:32:33 +0000 (0:00:02.178) 0:01:36.896 ******** 2026-04-11 03:33:47.194557 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:33:47.194561 | orchestrator | 2026-04-11 03:33:47.194565 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-04-11 03:33:47.194568 | orchestrator | Saturday 11 April 2026 03:32:35 +0000 (0:00:02.137) 0:01:39.034 ******** 2026-04-11 03:33:47.194572 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:33:47.194576 | orchestrator | 2026-04-11 03:33:47.194580 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-04-11 03:33:47.194583 | orchestrator | Saturday 11 April 2026 03:32:37 +0000 (0:00:02.122) 0:01:41.156 ******** 2026-04-11 03:33:47.194587 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:33:47.194591 | orchestrator | 2026-04-11 03:33:47.194595 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-11 03:33:47.194602 | orchestrator | Saturday 11 April 2026 03:33:06 +0000 (0:00:28.617) 0:02:09.774 ******** 2026-04-11 03:33:47.194608 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:33:47.194614 | orchestrator | 2026-04-11 03:33:47.194621 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-11 03:33:47.194627 | orchestrator | Saturday 11 April 2026 03:33:08 +0000 (0:00:02.070) 0:02:11.845 ******** 2026-04-11 03:33:47.194633 | orchestrator | 2026-04-11 03:33:47.194640 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-11 03:33:47.194646 | orchestrator | Saturday 11 April 2026 03:33:08 +0000 (0:00:00.074) 0:02:11.919 ******** 2026-04-11 03:33:47.194653 | orchestrator | 2026-04-11 03:33:47.194660 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-04-11 03:33:47.194665 | orchestrator | Saturday 11 April 2026 03:33:08 +0000 (0:00:00.075) 0:02:11.995 ******** 2026-04-11 03:33:47.194669 | orchestrator | 2026-04-11 03:33:47.194672 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-11 03:33:47.194676 | orchestrator | Saturday 11 April 2026 03:33:08 +0000 (0:00:00.073) 0:02:12.068 ******** 2026-04-11 03:33:47.194680 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:33:47.194684 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:33:47.194687 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:33:47.194691 | orchestrator | 2026-04-11 03:33:47.194695 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 03:33:47.194700 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-11 03:33:47.194705 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-11 03:33:47.194709 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-11 03:33:47.194713 | orchestrator | 2026-04-11 03:33:47.194716 | orchestrator | 2026-04-11 03:33:47.194720 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 03:33:47.194724 | orchestrator | Saturday 11 April 2026 03:33:47 +0000 (0:00:38.580) 0:02:50.648 ******** 2026-04-11 03:33:47.194727 | orchestrator | =============================================================================== 2026-04-11 03:33:47.194731 | orchestrator | glance : Restart glance-api container ---------------------------------- 38.58s 2026-04-11 03:33:47.194735 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.62s 2026-04-11 03:33:47.194739 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.31s 2026-04-11 03:33:47.194746 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.26s 2026-04-11 03:33:47.616389 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.59s 2026-04-11 03:33:47.616528 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.31s 2026-04-11 03:33:47.616575 | orchestrator | glance : Copying over config.json files for services -------------------- 4.25s 2026-04-11 03:33:47.616591 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.13s 2026-04-11 03:33:47.616607 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.04s 2026-04-11 03:33:47.616620 | orchestrator | glance : Check glance containers ---------------------------------------- 4.04s 2026-04-11 03:33:47.616636 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.01s 2026-04-11 03:33:47.616652 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.98s 2026-04-11 03:33:47.616666 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.97s 2026-04-11 03:33:47.616681 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.93s 2026-04-11 03:33:47.616692 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.92s 2026-04-11 03:33:47.616700 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.85s 2026-04-11 03:33:47.616709 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.66s 2026-04-11 03:33:47.616718 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.65s 2026-04-11 03:33:47.616727 | 
orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.64s 2026-04-11 03:33:47.616735 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.57s 2026-04-11 03:33:50.225792 | orchestrator | 2026-04-11 03:33:50 | INFO  | Task f30590d4-f971-428e-ac6f-5b8e94753162 (cinder) was prepared for execution. 2026-04-11 03:33:50.225881 | orchestrator | 2026-04-11 03:33:50 | INFO  | It takes a moment until task f30590d4-f971-428e-ac6f-5b8e94753162 (cinder) has been started and output is visible here. 2026-04-11 03:34:25.891480 | orchestrator | 2026-04-11 03:34:25.891575 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 03:34:25.891586 | orchestrator | 2026-04-11 03:34:25.891593 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 03:34:25.891600 | orchestrator | Saturday 11 April 2026 03:33:54 +0000 (0:00:00.302) 0:00:00.302 ******** 2026-04-11 03:34:25.891606 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:34:25.891613 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:34:25.891619 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:34:25.891625 | orchestrator | 2026-04-11 03:34:25.891631 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 03:34:25.891641 | orchestrator | Saturday 11 April 2026 03:33:55 +0000 (0:00:00.362) 0:00:00.665 ******** 2026-04-11 03:34:25.891651 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-11 03:34:25.891661 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-11 03:34:25.891670 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-11 03:34:25.891680 | orchestrator | 2026-04-11 03:34:25.891689 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-11 03:34:25.891699 | orchestrator | 
2026-04-11 03:34:25.891707 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-11 03:34:25.891716 | orchestrator | Saturday 11 April 2026 03:33:55 +0000 (0:00:00.511) 0:00:01.176 ******** 2026-04-11 03:34:25.891726 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:34:25.891737 | orchestrator | 2026-04-11 03:34:25.891748 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-04-11 03:34:25.891758 | orchestrator | Saturday 11 April 2026 03:33:56 +0000 (0:00:00.629) 0:00:01.806 ******** 2026-04-11 03:34:25.891769 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-04-11 03:34:25.891778 | orchestrator | 2026-04-11 03:34:25.891788 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-04-11 03:34:25.891823 | orchestrator | Saturday 11 April 2026 03:33:59 +0000 (0:00:03.463) 0:00:05.269 ******** 2026-04-11 03:34:25.891836 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-04-11 03:34:25.891846 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-04-11 03:34:25.891854 | orchestrator | 2026-04-11 03:34:25.891864 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-04-11 03:34:25.891874 | orchestrator | Saturday 11 April 2026 03:34:06 +0000 (0:00:06.313) 0:00:11.582 ******** 2026-04-11 03:34:25.891884 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-11 03:34:25.891895 | orchestrator | 2026-04-11 03:34:25.891905 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-04-11 03:34:25.891914 | orchestrator | Saturday 11 April 2026 03:34:09 +0000 
(0:00:03.146) 0:00:14.729 ******** 2026-04-11 03:34:25.891924 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-11 03:34:25.891934 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-04-11 03:34:25.891944 | orchestrator | 2026-04-11 03:34:25.891954 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-04-11 03:34:25.891963 | orchestrator | Saturday 11 April 2026 03:34:13 +0000 (0:00:03.988) 0:00:18.718 ******** 2026-04-11 03:34:25.891972 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-11 03:34:25.891982 | orchestrator | 2026-04-11 03:34:25.891992 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-04-11 03:34:25.892002 | orchestrator | Saturday 11 April 2026 03:34:16 +0000 (0:00:03.295) 0:00:22.013 ******** 2026-04-11 03:34:25.892027 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-11 03:34:25.892039 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-11 03:34:25.892049 | orchestrator | 2026-04-11 03:34:25.892059 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-11 03:34:25.892070 | orchestrator | Saturday 11 April 2026 03:34:23 +0000 (0:00:07.317) 0:00:29.330 ******** 2026-04-11 03:34:25.892083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 03:34:25.892118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 03:34:25.892190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 03:34:25.892204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:25.892225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:25.892237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:25.892249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:25.892271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:32.088583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:32.088677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:32.088703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:32.088711 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-11 03:34:32.088721 | orchestrator |
2026-04-11 03:34:32.088727 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-11 03:34:32.088732 | orchestrator | Saturday 11 April 2026 03:34:25 +0000 (0:00:02.089) 0:00:31.420 ********
2026-04-11 03:34:32.088737 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:34:32.088741 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:34:32.088745 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:34:32.088749 | orchestrator |
2026-04-11 03:34:32.088754 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-11 03:34:32.088758 | orchestrator | Saturday 11 April 2026 03:34:26 +0000 (0:00:00.575) 0:00:31.996 ********
2026-04-11 03:34:32.088762 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:34:32.088767 | orchestrator |
2026-04-11 03:34:32.088786 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-04-11 03:34:32.088790 | orchestrator | Saturday 11 April 2026 03:34:27 +0000 (0:00:00.589) 0:00:32.585 ********
2026-04-11 03:34:32.088795 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-04-11 03:34:32.088799 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-04-11 03:34:32.088803 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-04-11 03:34:32.088807 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-04-11 03:34:32.088811 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-04-11 03:34:32.088815 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-04-11 03:34:32.088819 | orchestrator |
2026-04-11 03:34:32.088823 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-04-11 03:34:32.088827 | orchestrator | Saturday 11 April 2026 03:34:28 +0000 (0:00:01.698) 0:00:34.284 ********
2026-04-11 03:34:32.088843 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-04-11 03:34:32.088850 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-11 03:34:32.088859 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-11 03:34:32.088863 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-11 03:34:32.088874 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-11 03:34:43.337340 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-11 03:34:43.337446 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-11 03:34:43.337481 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-11 03:34:43.337493 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-11 03:34:43.337525 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-11 03:34:43.337554 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-11 
03:34:43.337565 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-04-11 03:34:43.337576 | orchestrator |
2026-04-11 03:34:43.337587 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-04-11 03:34:43.337599 | orchestrator | Saturday 11 April 2026 03:34:32 +0000 (0:00:03.589) 0:00:37.874 ********
2026-04-11 03:34:43.337609 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-04-11 03:34:43.337620 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-04-11 03:34:43.337630 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-04-11 03:34:43.337640 | orchestrator |
2026-04-11 03:34:43.337649 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-04-11 03:34:43.337665 | orchestrator | Saturday 11 April 2026 03:34:34 +0000 (0:00:01.653) 0:00:39.527 ********
2026-04-11 03:34:43.337676 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-04-11 03:34:43.337686 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-04-11 03:34:43.337696 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-04-11 03:34:43.337706 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-04-11 03:34:43.337716 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-04-11 03:34:43.337734 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-04-11 03:34:43.337743 | orchestrator |
2026-04-11 03:34:43.337753 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-04-11 03:34:43.337762 | orchestrator | Saturday 11 April 2026 03:34:36 +0000 (0:00:02.729) 0:00:42.256 ********
2026-04-11 03:34:43.337773 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-04-11 03:34:43.337783 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-04-11 03:34:43.337793 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-04-11 03:34:43.337803 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-04-11 03:34:43.337817 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-04-11 03:34:43.337835 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-04-11 03:34:43.337853 | orchestrator |
2026-04-11 03:34:43.337872 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-04-11 03:34:43.337888 | orchestrator | Saturday 11 April 2026 03:34:37 +0000 (0:00:01.012) 0:00:43.269 ********
2026-04-11 03:34:43.337899 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:34:43.337911 | orchestrator |
2026-04-11 03:34:43.337923 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-04-11 03:34:43.337934 | orchestrator | Saturday 11 April 2026 03:34:37 +0000 (0:00:00.142) 0:00:43.412 ********
2026-04-11 03:34:43.337948 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:34:43.337963 | orchestrator |
skipping: [testbed-node-1]
2026-04-11 03:34:43.337980 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:34:43.337995 | orchestrator |
2026-04-11 03:34:43.338011 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-11 03:34:43.338095 | orchestrator | Saturday 11 April 2026 03:34:38 +0000 (0:00:00.573) 0:00:43.986 ********
2026-04-11 03:34:43.338114 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:34:43.338156 | orchestrator |
2026-04-11 03:34:43.338174 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-04-11 03:34:43.338192 | orchestrator | Saturday 11 April 2026 03:34:39 +0000 (0:00:00.643) 0:00:44.629 ********
2026-04-11 03:34:43.338226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-11 03:34:44.427122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 03:34:44.427303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 03:34:44.427318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:44.427330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:44.427339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:44.427367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:44.427378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:44.427398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 
03:34:44.427407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:44.427417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:44.427426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-11 03:34:44.427435 | orchestrator |
2026-04-11 03:34:44.427445 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-04-11 03:34:44.427456 | orchestrator | Saturday 11 April 2026 03:34:43 +0000 (0:00:04.260) 0:00:48.890 ********
2026-04-11 03:34:44.427472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-11 03:34:44.531779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:34:44.531881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 03:34:44.531909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 03:34:44.531930 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:34:44.531942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-11 03:34:44.531953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:34:44.532010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-11 03:34:44.532032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 03:34:44.532040 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:34:44.532047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-11 03:34:44.532053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:34:44.532060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 03:34:44.532067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 03:34:44.532079 | orchestrator | skipping: 
[testbed-node-2] 2026-04-11 03:34:44.532085 | orchestrator | 2026-04-11 03:34:44.532092 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-11 03:34:44.532104 | orchestrator | Saturday 11 April 2026 03:34:44 +0000 (0:00:01.073) 0:00:49.964 ******** 2026-04-11 03:34:45.185930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-11 03:34:45.186241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:34:45.186282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 03:34:45.186308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 03:34:45.186332 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:34:45.186390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-11 03:34:45.186452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:34:45.186480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 03:34:45.186505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 03:34:45.186544 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:34:45.186566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-11 03:34:45.186590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:34:45.186638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 03:34:50.058673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 03:34:50.058780 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:34:50.058797 | orchestrator | 2026-04-11 03:34:50.058809 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-04-11 03:34:50.058821 | orchestrator | Saturday 11 April 2026 03:34:45 +0000 (0:00:01.065) 0:00:51.030 ******** 2026-04-11 03:34:50.058832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 03:34:50.058846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 
03:34:50.058877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 03:34:50.058908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:50.058942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:50.058968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:50.058985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:50.059005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:50.059037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:34:50.059067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:03.637468 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:03.637565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:03.637576 | orchestrator | 2026-04-11 03:35:03.637584 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-11 03:35:03.637592 | orchestrator | Saturday 11 April 2026 03:34:50 +0000 (0:00:04.568) 0:00:55.599 ******** 2026-04-11 03:35:03.637599 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-11 03:35:03.637606 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-11 03:35:03.637612 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-11 03:35:03.637619 | orchestrator | 2026-04-11 03:35:03.637645 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-11 03:35:03.637651 | orchestrator | Saturday 11 April 2026 03:34:52 +0000 (0:00:02.011) 0:00:57.610 ******** 2026-04-11 03:35:03.637658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 03:35:03.637667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 03:35:03.637695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 03:35:03.637704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:03.637711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:03.637725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:03.637732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:03.637739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:03.637753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:06.285512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:06.285630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:06.285686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:06.285706 | orchestrator | 2026-04-11 03:35:06.285727 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-11 03:35:06.285747 | orchestrator | Saturday 11 April 2026 03:35:03 +0000 (0:00:11.555) 0:01:09.165 ******** 2026-04-11 03:35:06.285765 | orchestrator | changed: [testbed-node-0] 
2026-04-11 03:35:06.285784 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:35:06.285801 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:35:06.285819 | orchestrator | 2026-04-11 03:35:06.285838 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-11 03:35:06.285856 | orchestrator | Saturday 11 April 2026 03:35:05 +0000 (0:00:01.562) 0:01:10.728 ******** 2026-04-11 03:35:06.285876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-11 03:35:06.285938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-11 03:35:06.285962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:35:06.286087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:35:06.286110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 03:35:06.286152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 03:35:06.286172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 03:35:06.286189 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:35:06.286226 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 03:35:09.989969 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:35:09.990102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-11 03:35:09.990114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:35:09.990120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 03:35:09.990143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 03:35:09.990148 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:35:09.990153 | orchestrator | 2026-04-11 
03:35:09.990165 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-11 03:35:09.990171 | orchestrator | Saturday 11 April 2026 03:35:06 +0000 (0:00:01.109) 0:01:11.837 ******** 2026-04-11 03:35:09.990175 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:35:09.990179 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:35:09.990190 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:35:09.990194 | orchestrator | 2026-04-11 03:35:09.990198 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-04-11 03:35:09.990213 | orchestrator | Saturday 11 April 2026 03:35:07 +0000 (0:00:00.633) 0:01:12.471 ******** 2026-04-11 03:35:09.990228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 03:35:09.990242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 03:35:09.990250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-11 03:35:09.990258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:09.990266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:09.990277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:35:09.990295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:36:43.232323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:36:43.232428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 03:36:43.232440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 03:36:43.232464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 03:36:43.232494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-04-11 03:36:43.232510 | orchestrator | 2026-04-11 03:36:43.232532 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-11 03:36:43.232550 | orchestrator | Saturday 11 April 2026 03:35:10 +0000 (0:00:03.056) 0:01:15.527 ******** 2026-04-11 03:36:43.232564 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:36:43.232579 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:36:43.232591 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:36:43.232604 | orchestrator | 2026-04-11 03:36:43.232617 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-11 03:36:43.232631 | orchestrator | Saturday 11 April 2026 03:35:10 +0000 (0:00:00.320) 0:01:15.847 ******** 2026-04-11 03:36:43.232645 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:36:43.232659 | orchestrator | 2026-04-11 03:36:43.232692 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-11 03:36:43.232707 | orchestrator | Saturday 11 April 2026 03:35:12 +0000 (0:00:02.102) 0:01:17.950 ******** 2026-04-11 03:36:43.232722 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:36:43.232736 | orchestrator | 2026-04-11 03:36:43.232750 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-11 03:36:43.232765 | orchestrator | Saturday 11 April 2026 03:35:14 +0000 (0:00:02.201) 0:01:20.151 ******** 2026-04-11 03:36:43.232779 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:36:43.232790 | orchestrator | 2026-04-11 03:36:43.232798 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-11 03:36:43.232806 | orchestrator | Saturday 11 April 2026 03:35:34 +0000 (0:00:19.650) 0:01:39.802 ******** 2026-04-11 03:36:43.232814 | orchestrator | 2026-04-11 03:36:43.232822 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-04-11 03:36:43.232830 | orchestrator | Saturday 11 April 2026 03:35:34 +0000 (0:00:00.073) 0:01:39.875 ******** 2026-04-11 03:36:43.232837 | orchestrator | 2026-04-11 03:36:43.232846 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-11 03:36:43.232855 | orchestrator | Saturday 11 April 2026 03:35:34 +0000 (0:00:00.086) 0:01:39.962 ******** 2026-04-11 03:36:43.232864 | orchestrator | 2026-04-11 03:36:43.232873 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-11 03:36:43.232884 | orchestrator | Saturday 11 April 2026 03:35:34 +0000 (0:00:00.077) 0:01:40.039 ******** 2026-04-11 03:36:43.232898 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:36:43.232918 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:36:43.232933 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:36:43.232945 | orchestrator | 2026-04-11 03:36:43.232958 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-11 03:36:43.232973 | orchestrator | Saturday 11 April 2026 03:36:02 +0000 (0:00:27.557) 0:02:07.597 ******** 2026-04-11 03:36:43.232987 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:36:43.233001 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:36:43.233014 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:36:43.233028 | orchestrator | 2026-04-11 03:36:43.233041 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-11 03:36:43.233056 | orchestrator | Saturday 11 April 2026 03:36:07 +0000 (0:00:05.641) 0:02:13.238 ******** 2026-04-11 03:36:43.233100 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:36:43.233145 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:36:43.233156 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:36:43.233165 | orchestrator | 2026-04-11 
03:36:43.233174 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-11 03:36:43.233183 | orchestrator | Saturday 11 April 2026 03:36:36 +0000 (0:00:28.629) 0:02:41.868 ******** 2026-04-11 03:36:43.233192 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:36:43.233202 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:36:43.233211 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:36:43.233220 | orchestrator | 2026-04-11 03:36:43.233228 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-11 03:36:43.233237 | orchestrator | Saturday 11 April 2026 03:36:42 +0000 (0:00:06.430) 0:02:48.299 ******** 2026-04-11 03:36:43.233245 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:36:43.233252 | orchestrator | 2026-04-11 03:36:43.233260 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 03:36:43.233269 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-11 03:36:43.233278 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-11 03:36:43.233287 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-11 03:36:43.233295 | orchestrator | 2026-04-11 03:36:43.233303 | orchestrator | 2026-04-11 03:36:43.233311 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 03:36:43.233325 | orchestrator | Saturday 11 April 2026 03:36:43 +0000 (0:00:00.349) 0:02:48.648 ******** 2026-04-11 03:36:43.233333 | orchestrator | =============================================================================== 2026-04-11 03:36:43.233341 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 28.63s 2026-04-11 03:36:43.233349 | orchestrator | cinder 
: Restart cinder-api container ---------------------------------- 27.56s 2026-04-11 03:36:43.233358 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.65s 2026-04-11 03:36:43.233371 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.56s 2026-04-11 03:36:43.233383 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.32s 2026-04-11 03:36:43.233394 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.43s 2026-04-11 03:36:43.233404 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.31s 2026-04-11 03:36:43.233416 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.64s 2026-04-11 03:36:43.233428 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.57s 2026-04-11 03:36:43.233440 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.26s 2026-04-11 03:36:43.233450 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.99s 2026-04-11 03:36:43.233461 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.59s 2026-04-11 03:36:43.233472 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.46s 2026-04-11 03:36:43.233483 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.30s 2026-04-11 03:36:43.233506 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.15s 2026-04-11 03:36:43.697010 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.06s 2026-04-11 03:36:43.697150 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.73s 2026-04-11 03:36:43.697167 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.20s 2026-04-11 03:36:43.697177 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.10s 2026-04-11 03:36:43.697221 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.09s 2026-04-11 03:36:46.370783 | orchestrator | 2026-04-11 03:36:46 | INFO  | Task 6f9f29d0-7f17-467d-bdc8-673092493da6 (barbican) was prepared for execution. 2026-04-11 03:36:46.370888 | orchestrator | 2026-04-11 03:36:46 | INFO  | It takes a moment until task 6f9f29d0-7f17-467d-bdc8-673092493da6 (barbican) has been started and output is visible here. 2026-04-11 03:37:31.006548 | orchestrator | 2026-04-11 03:37:31.006653 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 03:37:31.006669 | orchestrator | 2026-04-11 03:37:31.006679 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 03:37:31.006689 | orchestrator | Saturday 11 April 2026 03:36:51 +0000 (0:00:00.297) 0:00:00.297 ******** 2026-04-11 03:37:31.006699 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:37:31.006709 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:37:31.006718 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:37:31.006727 | orchestrator | 2026-04-11 03:37:31.006736 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 03:37:31.006745 | orchestrator | Saturday 11 April 2026 03:36:51 +0000 (0:00:00.349) 0:00:00.646 ******** 2026-04-11 03:37:31.006753 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-11 03:37:31.006763 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-11 03:37:31.006771 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-11 03:37:31.006780 | orchestrator | 2026-04-11 03:37:31.006789 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-04-11 03:37:31.006798 | orchestrator | 2026-04-11 03:37:31.006807 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-11 03:37:31.006815 | orchestrator | Saturday 11 April 2026 03:36:51 +0000 (0:00:00.517) 0:00:01.164 ******** 2026-04-11 03:37:31.006825 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:37:31.006834 | orchestrator | 2026-04-11 03:37:31.006843 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-11 03:37:31.006852 | orchestrator | Saturday 11 April 2026 03:36:52 +0000 (0:00:00.607) 0:00:01.772 ******** 2026-04-11 03:37:31.006861 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-11 03:37:31.006870 | orchestrator | 2026-04-11 03:37:31.006879 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-11 03:37:31.006887 | orchestrator | Saturday 11 April 2026 03:36:56 +0000 (0:00:03.535) 0:00:05.307 ******** 2026-04-11 03:37:31.006896 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-11 03:37:31.006905 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-11 03:37:31.006914 | orchestrator | 2026-04-11 03:37:31.006927 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-11 03:37:31.006942 | orchestrator | Saturday 11 April 2026 03:37:02 +0000 (0:00:06.626) 0:00:11.933 ******** 2026-04-11 03:37:31.006958 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-11 03:37:31.006972 | orchestrator | 2026-04-11 03:37:31.006988 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-11 
03:37:31.007003 | orchestrator | Saturday 11 April 2026 03:37:05 +0000 (0:00:03.248) 0:00:15.182 ******** 2026-04-11 03:37:31.007035 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-11 03:37:31.007051 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-11 03:37:31.007067 | orchestrator | 2026-04-11 03:37:31.007083 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-11 03:37:31.007096 | orchestrator | Saturday 11 April 2026 03:37:10 +0000 (0:00:04.130) 0:00:19.312 ******** 2026-04-11 03:37:31.007107 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-11 03:37:31.007142 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-11 03:37:31.007174 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-11 03:37:31.007185 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-11 03:37:31.007196 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-11 03:37:31.007206 | orchestrator | 2026-04-11 03:37:31.007216 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-04-11 03:37:31.007226 | orchestrator | Saturday 11 April 2026 03:37:25 +0000 (0:00:15.396) 0:00:34.709 ******** 2026-04-11 03:37:31.007237 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-11 03:37:31.007247 | orchestrator | 2026-04-11 03:37:31.007257 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-11 03:37:31.007268 | orchestrator | Saturday 11 April 2026 03:37:29 +0000 (0:00:03.788) 0:00:38.497 ******** 2026-04-11 03:37:31.007281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:31.007313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:31.007324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:31.007341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:31.007362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:31.007372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:31.007388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:37.160421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:37.160514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:37.160526 | orchestrator | 2026-04-11 03:37:37.160535 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-11 03:37:37.160542 | orchestrator | Saturday 11 April 2026 03:37:30 +0000 (0:00:01.787) 0:00:40.285 ******** 2026-04-11 03:37:37.160570 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-11 03:37:37.160607 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-11 03:37:37.160614 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-11 03:37:37.160622 | orchestrator | 2026-04-11 03:37:37.160629 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-11 03:37:37.160636 | orchestrator | Saturday 11 April 2026 03:37:32 +0000 (0:00:01.270) 0:00:41.555 ******** 2026-04-11 03:37:37.160643 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:37:37.160650 | orchestrator | 2026-04-11 03:37:37.160669 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-11 03:37:37.160675 | orchestrator | Saturday 11 April 2026 03:37:32 +0000 (0:00:00.356) 0:00:41.912 ******** 2026-04-11 03:37:37.160682 | orchestrator | 
skipping: [testbed-node-0] 2026-04-11 03:37:37.160701 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:37:37.160710 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:37:37.160729 | orchestrator | 2026-04-11 03:37:37.160737 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-11 03:37:37.160746 | orchestrator | Saturday 11 April 2026 03:37:32 +0000 (0:00:00.345) 0:00:42.258 ******** 2026-04-11 03:37:37.160755 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:37:37.160767 | orchestrator | 2026-04-11 03:37:37.160804 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-11 03:37:37.160816 | orchestrator | Saturday 11 April 2026 03:37:33 +0000 (0:00:00.622) 0:00:42.880 ******** 2026-04-11 03:37:37.160825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:37.160849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:37.160858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:37.160898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:37.160926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:37.160933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:37.160947 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:37.160977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:38.638781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:38.638929 | orchestrator | 2026-04-11 03:37:38.638947 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-04-11 03:37:38.638960 | orchestrator | Saturday 11 April 2026 03:37:37 +0000 (0:00:03.555) 0:00:46.435 ******** 2026-04-11 03:37:38.638989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-11 03:37:38.639003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:37:38.639016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:37:38.639027 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:37:38.639040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-11 03:37:38.639072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:37:38.639093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:37:38.639104 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:37:38.639152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-11 03:37:38.639165 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:37:38.639177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:37:38.639188 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:37:38.639200 | orchestrator | 2026-04-11 03:37:38.639211 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-11 03:37:38.639222 | orchestrator | Saturday 11 April 2026 03:37:37 +0000 (0:00:00.665) 0:00:47.100 ******** 2026-04-11 03:37:38.639244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-11 03:37:42.138840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:37:42.138946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 
03:37:42.138959 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:37:42.138970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-11 03:37:42.138979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:37:42.138987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:37:42.139016 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:37:42.139040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-11 03:37:42.139053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:37:42.139062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:37:42.139070 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:37:42.139091 | orchestrator | 2026-04-11 03:37:42.139101 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-11 03:37:42.139111 | orchestrator | Saturday 11 April 2026 03:37:38 +0000 (0:00:00.824) 0:00:47.925 ******** 2026-04-11 03:37:42.139134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:42.139144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:42.139164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:52.518066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:52.518307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:52.519109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:52.519204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:52.519259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:52.519278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:37:52.519302 | orchestrator |
2026-04-11 03:37:52.519324 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-04-11 03:37:52.519341 | orchestrator | Saturday 11 April 2026 03:37:42 +0000 (0:00:03.495) 0:00:51.420 ********
2026-04-11 03:37:52.519357 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:37:52.519374 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:37:52.519389 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:37:52.519406 | orchestrator |
2026-04-11 03:37:52.519449 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-04-11 03:37:52.519466 | orchestrator | Saturday 11 April 2026 03:37:43 +0000 (0:00:01.639) 0:00:53.060 ********
2026-04-11 03:37:52.519482 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 03:37:52.519498 | orchestrator |
2026-04-11 03:37:52.519513 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-04-11 03:37:52.519528 | orchestrator | Saturday 11 April 2026 03:37:44 +0000 (0:00:01.053) 0:00:54.113 ********
2026-04-11 03:37:52.519544 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:37:52.519560 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:37:52.519577 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:37:52.519593 | orchestrator |
2026-04-11 03:37:52.519610 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-04-11 03:37:52.519627 | orchestrator | Saturday 11 April 2026 03:37:45 +0000 (0:00:00.647) 0:00:54.761 ********
2026-04-11 03:37:52.519701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:52.519738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:52.519752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:52.519810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:53.443565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:53.443643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:53.443665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:53.443671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:53.443676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:53.443680 | orchestrator | 2026-04-11 03:37:53.443686 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-11 03:37:53.443691 | orchestrator | Saturday 11 April 2026 03:37:52 +0000 (0:00:07.039) 0:01:01.800 ******** 2026-04-11 03:37:53.443706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-11 03:37:53.443714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:37:53.443719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:37:53.443730 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:37:53.443735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-11 03:37:53.443740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:37:53.443745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:37:53.443749 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:37:53.443760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-11 03:37:55.944193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:37:55.944360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:37:55.944392 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:37:55.944413 | orchestrator | 2026-04-11 03:37:55.944432 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-11 03:37:55.944452 | orchestrator | Saturday 11 April 2026 03:37:53 +0000 (0:00:00.921) 0:01:02.722 ******** 2026-04-11 03:37:55.944471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:55.944492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:55.944567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-11 03:37:55.944607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:55.944629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:55.944647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:55.944666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:55.944685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:55.944715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:37:55.944745 | orchestrator | 2026-04-11 03:37:55.944766 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-11 03:37:55.944789 | orchestrator | Saturday 11 April 2026 03:37:55 +0000 (0:00:02.497) 0:01:05.219 ******** 2026-04-11 03:38:47.109317 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:38:47.109420 | orchestrator | skipping: [testbed-node-1] 2026-04-11 
03:38:47.109431 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:38:47.109438 | orchestrator |
2026-04-11 03:38:47.109446 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-04-11 03:38:47.109452 | orchestrator | Saturday 11 April 2026 03:37:56 +0000 (0:00:00.306) 0:01:05.526 ********
2026-04-11 03:38:47.109456 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:38:47.109460 | orchestrator |
2026-04-11 03:38:47.109464 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-04-11 03:38:47.109469 | orchestrator | Saturday 11 April 2026 03:37:58 +0000 (0:00:02.110) 0:01:07.636 ********
2026-04-11 03:38:47.109473 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:38:47.109477 | orchestrator |
2026-04-11 03:38:47.109481 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-04-11 03:38:47.109485 | orchestrator | Saturday 11 April 2026 03:38:00 +0000 (0:00:02.340) 0:01:09.977 ********
2026-04-11 03:38:47.109489 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:38:47.109493 | orchestrator |
2026-04-11 03:38:47.109497 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-11 03:38:47.109501 | orchestrator | Saturday 11 April 2026 03:38:13 +0000 (0:00:12.368) 0:01:22.346 ********
2026-04-11 03:38:47.109504 | orchestrator |
2026-04-11 03:38:47.109508 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-11 03:38:47.109512 | orchestrator | Saturday 11 April 2026 03:38:13 +0000 (0:00:00.077) 0:01:22.424 ********
2026-04-11 03:38:47.109516 | orchestrator |
2026-04-11 03:38:47.109519 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-11 03:38:47.109523 | orchestrator | Saturday 11 April 2026 03:38:13 +0000 (0:00:00.073) 0:01:22.497 ********
2026-04-11 03:38:47.109527 | orchestrator |
2026-04-11 03:38:47.109531 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-04-11 03:38:47.109535 | orchestrator | Saturday 11 April 2026 03:38:13 +0000 (0:00:00.076) 0:01:22.573 ********
2026-04-11 03:38:47.109538 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:38:47.109542 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:38:47.109546 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:38:47.109550 | orchestrator |
2026-04-11 03:38:47.109554 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-04-11 03:38:47.109557 | orchestrator | Saturday 11 April 2026 03:38:25 +0000 (0:00:12.236) 0:01:34.810 ********
2026-04-11 03:38:47.109561 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:38:47.109565 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:38:47.109569 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:38:47.109572 | orchestrator |
2026-04-11 03:38:47.109576 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-04-11 03:38:47.109580 | orchestrator | Saturday 11 April 2026 03:38:36 +0000 (0:00:10.496) 0:01:45.306 ********
2026-04-11 03:38:47.109584 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:38:47.109587 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:38:47.109591 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:38:47.109595 | orchestrator |
2026-04-11 03:38:47.109599 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 03:38:47.109604 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-11 03:38:47.109609 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 03:38:47.109613 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 03:38:47.109636 | orchestrator |
2026-04-11 03:38:47.109640 | orchestrator |
2026-04-11 03:38:47.109644 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 03:38:47.109647 | orchestrator | Saturday 11 April 2026 03:38:46 +0000 (0:00:10.681) 0:01:55.988 ********
2026-04-11 03:38:47.109651 | orchestrator | ===============================================================================
2026-04-11 03:38:47.109655 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.40s
2026-04-11 03:38:47.109659 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.37s
2026-04-11 03:38:47.109662 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.24s
2026-04-11 03:38:47.109666 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.68s
2026-04-11 03:38:47.109670 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.50s
2026-04-11 03:38:47.109674 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 7.04s
2026-04-11 03:38:47.109677 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.63s
2026-04-11 03:38:47.109681 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.13s
2026-04-11 03:38:47.109685 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.79s
2026-04-11 03:38:47.109689 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.56s
2026-04-11 03:38:47.109693 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.54s
2026-04-11 03:38:47.109697 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.50s
2026-04-11 03:38:47.109710 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.25s
2026-04-11 03:38:47.109714 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.50s
2026-04-11 03:38:47.109718 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.34s
2026-04-11 03:38:47.109732 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.11s
2026-04-11 03:38:47.109737 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.79s
2026-04-11 03:38:47.109740 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.64s
2026-04-11 03:38:47.109744 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.27s
2026-04-11 03:38:47.109748 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.05s
2026-04-11 03:38:49.792801 | orchestrator | 2026-04-11 03:38:49 | INFO  | Task e9442690-49bf-4e67-a463-7e17fc776517 (designate) was prepared for execution.
2026-04-11 03:38:49.792916 | orchestrator | 2026-04-11 03:38:49 | INFO  | It takes a moment until task e9442690-49bf-4e67-a463-7e17fc776517 (designate) has been started and output is visible here.
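(Annotation, not part of the job output.) The `healthcheck` dicts repeated throughout the barbican task output above follow the kolla-ansible convention: interval, retries, start_period, and timeout are stored as strings of seconds without a unit suffix, and `test` is a `CMD-SHELL` command list. A minimal sketch of how such a dict maps onto the equivalent `docker run` health flags; the helper function name is hypothetical:

```python
# Hypothetical helper (not from kolla-ansible): translate one of the
# 'healthcheck' dicts printed in the log above into `docker run` flags.
def healthcheck_flags(hc: dict) -> list[str]:
    return [
        # kolla stores bare seconds; docker expects a duration suffix
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
        # test[0] is the form marker ('CMD-SHELL'); the rest is the command
        "--health-cmd=" + " ".join(hc["test"][1:]),
    ]

# One of the barbican-worker healthcheck dicts from the log:
flags = healthcheck_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"],
    "timeout": "30",
})
print(flags[-1])  # --health-cmd=healthcheck_port barbican-worker 5672
```

`healthcheck_port barbican-worker 5672` checks that the named process holds a connection on port 5672 (RabbitMQ), while the API containers use `healthcheck_curl` against their bound HTTP endpoint instead.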
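(Annotation, not part of the job output.) When scanning long upgrade logs like this one, the per-play `PLAY RECAP` lines are the quickest health signal: `failed=0` and `unreachable=0` on every host means the play succeeded. A small sketch of parsing those counter lines, e.g. for a log-postprocessing script (the parser itself is an assumption, not something the job runs):

```python
import re

# Matches lines like:
#   testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap(line: str) -> dict:
    """Return {'host': ..., 'ok': int, 'changed': int, ...} for one recap line."""
    m = RECAP_RE.match(line.strip())
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(val)
        for key, val in (pair.split("=") for pair in m.group("counters").split())
    }
    return {"host": m.group("host"), **counters}

recap = parse_recap(
    "testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0"
)
# The barbican play above passed on this host:
assert recap["failed"] == 0 and recap["unreachable"] == 0
```

The `TASKS RECAP` timing lines that follow the play recap (slowest tasks first, as printed by the Ansible `profile_tasks` callback) are what to read next when a play passed but took longer than expected.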
2026-04-11 03:39:21.784570 | orchestrator |
2026-04-11 03:39:21.784701 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 03:39:21.784723 | orchestrator |
2026-04-11 03:39:21.784737 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 03:39:21.784749 | orchestrator | Saturday 11 April 2026 03:38:54 +0000 (0:00:00.322) 0:00:00.322 ********
2026-04-11 03:39:21.784761 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:39:21.784776 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:39:21.784789 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:39:21.784802 | orchestrator |
2026-04-11 03:39:21.784814 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 03:39:21.784826 | orchestrator | Saturday 11 April 2026 03:38:54 +0000 (0:00:00.357) 0:00:00.680 ********
2026-04-11 03:39:21.784841 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-11 03:39:21.784854 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-11 03:39:21.784896 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-11 03:39:21.784909 | orchestrator |
2026-04-11 03:39:21.784922 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-11 03:39:21.784936 | orchestrator |
2026-04-11 03:39:21.784948 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-11 03:39:21.784961 | orchestrator | Saturday 11 April 2026 03:38:55 +0000 (0:00:00.543) 0:00:01.224 ********
2026-04-11 03:39:21.784976 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:39:21.784990 | orchestrator |
2026-04-11 03:39:21.785003 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-04-11 03:39:21.785015 | orchestrator | Saturday 11 April 2026 03:38:55 +0000 (0:00:00.608) 0:00:01.833 ********
2026-04-11 03:39:21.785027 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-04-11 03:39:21.785039 | orchestrator |
2026-04-11 03:39:21.785051 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-04-11 03:39:21.785064 | orchestrator | Saturday 11 April 2026 03:38:59 +0000 (0:00:03.399) 0:00:05.232 ********
2026-04-11 03:39:21.785077 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-04-11 03:39:21.785090 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-04-11 03:39:21.785103 | orchestrator |
2026-04-11 03:39:21.785188 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-04-11 03:39:21.785203 | orchestrator | Saturday 11 April 2026 03:39:05 +0000 (0:00:06.334) 0:00:11.567 ********
2026-04-11 03:39:21.785217 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-11 03:39:21.785230 | orchestrator |
2026-04-11 03:39:21.785244 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-04-11 03:39:21.785257 | orchestrator | Saturday 11 April 2026 03:39:08 +0000 (0:00:03.304) 0:00:14.872 ********
2026-04-11 03:39:21.785270 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-11 03:39:21.785285 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-04-11 03:39:21.785298 | orchestrator |
2026-04-11 03:39:21.785311 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-04-11 03:39:21.785325 | orchestrator | Saturday 11 April 2026 03:39:12 +0000 (0:00:03.979) 0:00:18.851 ********
2026-04-11 03:39:21.785340 | orchestrator | ok: [testbed-node-0] =>
(item=admin) 2026-04-11 03:39:21.785354 | orchestrator | 2026-04-11 03:39:21.785368 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-04-11 03:39:21.785381 | orchestrator | Saturday 11 April 2026 03:39:15 +0000 (0:00:03.113) 0:00:21.965 ******** 2026-04-11 03:39:21.785394 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-04-11 03:39:21.785407 | orchestrator | 2026-04-11 03:39:21.785421 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-11 03:39:21.785434 | orchestrator | Saturday 11 April 2026 03:39:19 +0000 (0:00:03.688) 0:00:25.653 ******** 2026-04-11 03:39:21.785471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-11 03:39:21.785527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-11 03:39:21.785543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-11 03:39:21.785556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:39:21.785570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:39:21.785588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:39:21.785601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:21.785631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:28.173337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:28.173446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:28.173465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:28.173478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:28.173506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:28.173539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:28.173570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:28.173584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 
03:39:28.173597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:28.173609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:28.173621 | orchestrator | 2026-04-11 03:39:28.173636 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-11 03:39:28.173650 | orchestrator | Saturday 11 April 2026 03:39:22 +0000 (0:00:02.848) 0:00:28.501 ******** 2026-04-11 03:39:28.173662 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:39:28.173675 | orchestrator | 2026-04-11 03:39:28.173689 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-11 03:39:28.173701 | orchestrator | Saturday 11 April 2026 03:39:22 +0000 (0:00:00.139) 0:00:28.641 ******** 2026-04-11 03:39:28.173713 | orchestrator | skipping: [testbed-node-0] 2026-04-11 
03:39:28.173725 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:39:28.173737 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:39:28.173758 | orchestrator | 2026-04-11 03:39:28.173770 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-11 03:39:28.173782 | orchestrator | Saturday 11 April 2026 03:39:23 +0000 (0:00:00.575) 0:00:29.216 ******** 2026-04-11 03:39:28.173800 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:39:28.173813 | orchestrator | 2026-04-11 03:39:28.173826 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-11 03:39:28.173838 | orchestrator | Saturday 11 April 2026 03:39:23 +0000 (0:00:00.618) 0:00:29.835 ******** 2026-04-11 03:39:28.173852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-11 03:39:28.173875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-11 03:39:29.967779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-11 03:39:29.967861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:39:29.967905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:39:29.967913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:39:29.967921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:29.967941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:29.967949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:29.967956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:29.967965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:29.967981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:29.967988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:29.967995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:29.968009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:30.935398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:30.935508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:30.935557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:30.935573 | orchestrator | 2026-04-11 03:39:30.935587 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-11 03:39:30.935616 | orchestrator | Saturday 11 April 2026 03:39:29 +0000 (0:00:06.097) 0:00:35.932 ******** 2026-04-11 03:39:30.935632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 03:39:30.935646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 03:39:30.935677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 03:39:30.935690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 03:39:30.935700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 03:39:30.935721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:39:30.935732 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:39:30.935750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 03:39:30.935763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 03:39:30.935773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 03:39:30.935791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 03:39:31.927984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 03:39:31.928243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:39:31.928282 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:39:31.928366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 03:39:31.928382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 03:39:31.928395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 03:39:31.928406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 03:39:31.928481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 03:39:31.928504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:39:31.928521 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:39:31.928539 | orchestrator |
2026-04-11 03:39:31.928556 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-04-11 03:39:31.928575 | orchestrator | Saturday 11 April 2026 03:39:31 +0000 (0:00:01.093) 0:00:37.025 ********
2026-04-11 03:39:31.928594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 03:39:31.928605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 03:39:31.928616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 03:39:31.928634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 03:39:32.312104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 03:39:32.312313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:39:32.312341 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:39:32.312386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 03:39:32.312406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 03:39:32.312424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 03:39:32.312475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 03:39:32.312519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 03:39:32.312536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:39:32.312552 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:39:32.312577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 03:39:32.312596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 03:39:32.312613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 03:39:32.312639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 03:39:32.312665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 03:39:36.846279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:39:36.846408 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:39:36.846424 | orchestrator |
2026-04-11 03:39:36.846434 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-04-11 03:39:36.846444 | orchestrator | Saturday 11 April 2026 03:39:32 +0000 (0:00:01.246) 0:00:38.272 ********
2026-04-11 03:39:36.846466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 03:39:36.846473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 03:39:36.846494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 03:39:36.846513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 03:39:36.846520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 03:39:36.846528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 03:39:36.846534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 03:39:36.846540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 03:39:36.846552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 03:39:36.846559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 03:39:36.846571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 03:39:48.939657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 03:39:48.939768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 03:39:48.939784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 03:39:48.939812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 03:39:48.939821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:39:48.939828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:39:48.939850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:39:48.939858 | orchestrator |
2026-04-11 03:39:48.939866 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-04-11 03:39:48.939876 | orchestrator | Saturday 11 April 2026 03:39:38 +0000 (0:00:06.280) 0:00:44.552 ********
2026-04-11 03:39:48.939895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 03:39:48.939908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 03:39:48.939928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 03:39:48.939940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:39:48.939961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:39:57.753035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:39:57.754004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:57.754194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:57.754212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:57.754223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:57.754235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:57.754269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:57.754286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:57.754292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:57.754305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:57.754315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:57.754324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:57.754332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:39:57.754340 | orchestrator | 2026-04-11 03:39:57.754349 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-11 03:39:57.754359 | orchestrator | Saturday 11 April 2026 03:39:53 +0000 (0:00:15.153) 0:00:59.706 ******** 2026-04-11 03:39:57.754376 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-11 03:40:02.345795 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-11 03:40:02.345899 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-11 03:40:02.345910 | orchestrator | 2026-04-11 03:40:02.345919 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-11 03:40:02.345941 | orchestrator | Saturday 11 April 2026 03:39:57 +0000 (0:00:04.010) 0:01:03.717 ******** 2026-04-11 03:40:02.345948 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-11 03:40:02.345955 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-11 03:40:02.345982 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-11 03:40:02.345989 | orchestrator | 2026-04-11 03:40:02.345996 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-11 03:40:02.346006 | orchestrator | Saturday 11 April 2026 03:40:00 +0000 (0:00:02.698) 0:01:06.416 ******** 2026-04-11 03:40:02.346096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-11 03:40:02.346137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-11 03:40:02.346153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-04-11 03:40:02.346185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:40:02.346208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 03:40:02.346229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-04-11 03:40:02.346243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 03:40:02.346251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:40:02.346258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-04-11 03:40:02.346265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 03:40:02.346279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 03:40:05.279723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-04-11 03:40:05.279822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 03:40:05.279834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 03:40:05.279842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 03:40:05.279849 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:40:05.279858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:40:05.279887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:40:05.279913 | orchestrator | 2026-04-11 03:40:05.279921 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-04-11 03:40:05.279929 | orchestrator | Saturday 11 April 2026 03:40:03 +0000 (0:00:02.973) 0:01:09.390 ******** 2026-04-11 03:40:05.279937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-11 03:40:05.279945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-11 
03:40:05.279953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-11 03:40:05.279959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:40:05.279981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 03:40:06.232949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 03:40:06.233049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 03:40:06.233064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:40:06.233072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 03:40:06.233080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 03:40:06.233087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 03:40:06.233183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:40:06.233195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 03:40:06.233201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 03:40:06.233208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 03:40:06.233215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:40:06.233222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:40:06.233241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:40:06.233250 | orchestrator |
2026-04-11 03:40:06.233269 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-11 03:40:06.233284 | orchestrator | Saturday 11 April 2026 03:40:06 +0000 (0:00:02.806) 0:01:12.196 ********
2026-04-11 03:40:07.373516 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:40:07.373597 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:40:07.373604 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:40:07.373608 | orchestrator |
2026-04-11 03:40:07.373614 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-04-11 03:40:07.373619 | orchestrator | Saturday 11 April 2026 03:40:06 +0000 (0:00:00.330) 0:01:12.526 ********
2026-04-11 03:40:07.373626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-11 03:40:07.373635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 03:40:07.373640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 03:40:07.373646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 03:40:07.373670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 03:40:07.373695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:40:07.373700 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:40:07.373704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-11 03:40:07.373708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 03:40:07.373712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 03:40:07.373720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 03:40:07.373724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 03:40:07.373734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:40:10.744529 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:40:10.744665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-11 03:40:10.744692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 03:40:10.744711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 03:40:10.744775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 03:40:10.744795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 03:40:10.744828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:40:10.744846 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:40:10.744863 | orchestrator |
2026-04-11 03:40:10.744904 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-04-11 03:40:10.744924 | orchestrator | Saturday 11 April 2026 03:40:07 +0000 (0:00:00.931) 0:01:13.458 ********
2026-04-11 03:40:10.744941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-11 03:40:10.744960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-11 03:40:10.744988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-11 03:40:10.745005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:40:10.745038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:40:12.750573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 03:40:12.750683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 03:40:12.750701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 03:40:12.750738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 03:40:12.750751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 03:40:12.750765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 03:40:12.750824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 03:40:12.750848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 03:40:12.750868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 03:40:12.750901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 03:40:12.750917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:40:12.750928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:40:12.750945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 03:40:12.750958 | orchestrator |
2026-04-11 03:40:12.750971 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-11 03:40:12.750984 | orchestrator | Saturday 11 April 2026 03:40:12 +0000 (0:00:04.883) 0:01:18.341 ********
2026-04-11 03:40:12.750995 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:40:12.751024 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:41:37.704893 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:41:37.705003 | orchestrator |
2026-04-11 03:41:37.705025 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-04-11 03:41:37.705043 | orchestrator | Saturday 11 April 2026 03:40:12 +0000 (0:00:00.371) 0:01:18.713 ********
2026-04-11 03:41:37.705060 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-04-11 03:41:37.705076 | orchestrator |
2026-04-11 03:41:37.705091 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-04-11 03:41:37.705107 | orchestrator | Saturday 11 April 2026 03:40:14 +0000 (0:00:02.202) 0:01:20.916 ********
2026-04-11 03:41:37.705184 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-11 03:41:37.705201 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-04-11 03:41:37.705213 | orchestrator |
2026-04-11 03:41:37.705226 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-04-11 03:41:37.705241 | orchestrator | Saturday 11 April 2026 03:40:17 +0000 (0:00:02.265) 0:01:23.181 ********
2026-04-11 03:41:37.705256 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:41:37.705299 | orchestrator |
2026-04-11 03:41:37.705315 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-11 03:41:37.705330 | orchestrator | Saturday 11 April 2026 03:40:33 +0000 (0:00:16.428) 0:01:39.609 ********
2026-04-11 03:41:37.705345 | orchestrator |
2026-04-11 03:41:37.705360 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-11 03:41:37.705374 | orchestrator | Saturday 11 April 2026 03:40:33 +0000 (0:00:00.075) 0:01:39.685 ********
2026-04-11 03:41:37.705389 | orchestrator |
2026-04-11 03:41:37.705404 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-11 03:41:37.705419 | orchestrator | Saturday 11 April 2026 03:40:33 +0000 (0:00:00.075) 0:01:39.760 ********
2026-04-11 03:41:37.705434 | orchestrator |
2026-04-11 03:41:37.705450 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-04-11 03:41:37.705466 | orchestrator | Saturday 11 April 2026 03:40:33 +0000 (0:00:00.074) 0:01:39.835 ********
2026-04-11 03:41:37.705481 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:41:37.705498 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:41:37.705514 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:41:37.705530 | orchestrator |
2026-04-11 03:41:37.705545 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-04-11 03:41:37.705561 | orchestrator | Saturday 11 April 2026 03:40:42 +0000 (0:00:08.795) 0:01:48.630 ********
2026-04-11 03:41:37.705576 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:41:37.705590 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:41:37.705605 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:41:37.705619 | orchestrator |
2026-04-11 03:41:37.705634 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-04-11 03:41:37.705653 | orchestrator | Saturday 11 April 2026 03:40:53 +0000 (0:00:11.263) 0:01:59.893 ********
2026-04-11 03:41:37.705670 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:41:37.705685 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:41:37.705699 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:41:37.705715 | orchestrator |
2026-04-11 03:41:37.705731 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-04-11 03:41:37.705746 | orchestrator | Saturday 11 April 2026 03:41:05 +0000 (0:00:11.319) 0:02:11.213 ********
2026-04-11 03:41:37.705761 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:41:37.705777 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:41:37.705791 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:41:37.705806 | orchestrator |
2026-04-11 03:41:37.705821
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-11 03:41:37.705837 | orchestrator | Saturday 11 April 2026 03:41:11 +0000 (0:00:06.409) 0:02:17.622 ******** 2026-04-11 03:41:37.705852 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:41:37.705866 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:41:37.705881 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:41:37.705896 | orchestrator | 2026-04-11 03:41:37.705910 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-11 03:41:37.705925 | orchestrator | Saturday 11 April 2026 03:41:23 +0000 (0:00:11.602) 0:02:29.225 ******** 2026-04-11 03:41:37.705942 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:41:37.705957 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:41:37.705971 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:41:37.705986 | orchestrator | 2026-04-11 03:41:37.706000 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-11 03:41:37.706078 | orchestrator | Saturday 11 April 2026 03:41:29 +0000 (0:00:06.452) 0:02:35.677 ******** 2026-04-11 03:41:37.706094 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:41:37.706109 | orchestrator | 2026-04-11 03:41:37.706147 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 03:41:37.706164 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-11 03:41:37.706197 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 03:41:37.706212 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 03:41:37.706227 | orchestrator | 2026-04-11 03:41:37.706242 | orchestrator | 2026-04-11 03:41:37.706258 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-11 03:41:37.706274 | orchestrator | Saturday 11 April 2026 03:41:37 +0000 (0:00:07.521) 0:02:43.199 ******** 2026-04-11 03:41:37.706306 | orchestrator | =============================================================================== 2026-04-11 03:41:37.706322 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.43s 2026-04-11 03:41:37.706338 | orchestrator | designate : Copying over designate.conf -------------------------------- 15.15s 2026-04-11 03:41:37.706379 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.60s 2026-04-11 03:41:37.706396 | orchestrator | designate : Restart designate-central container ------------------------ 11.32s 2026-04-11 03:41:37.706411 | orchestrator | designate : Restart designate-api container ---------------------------- 11.26s 2026-04-11 03:41:37.706426 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.80s 2026-04-11 03:41:37.706442 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.52s 2026-04-11 03:41:37.706458 | orchestrator | designate : Restart designate-worker container -------------------------- 6.45s 2026-04-11 03:41:37.706474 | orchestrator | designate : Restart designate-producer container ------------------------ 6.41s 2026-04-11 03:41:37.706490 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.33s 2026-04-11 03:41:37.706505 | orchestrator | designate : Copying over config.json files for services ----------------- 6.28s 2026-04-11 03:41:37.706520 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.10s 2026-04-11 03:41:37.706536 | orchestrator | designate : Check designate containers ---------------------------------- 4.88s 2026-04-11 03:41:37.706552 | orchestrator | designate : Copying over 
pools.yaml ------------------------------------- 4.01s 2026-04-11 03:41:37.706568 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.98s 2026-04-11 03:41:37.706584 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.69s 2026-04-11 03:41:37.706598 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.40s 2026-04-11 03:41:37.706612 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.30s 2026-04-11 03:41:37.706628 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.11s 2026-04-11 03:41:37.706644 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.97s 2026-04-11 03:41:40.277009 | orchestrator | 2026-04-11 03:41:40 | INFO  | Task 77e14c7e-915f-4330-94ec-2a457d961172 (octavia) was prepared for execution. 2026-04-11 03:41:40.277106 | orchestrator | 2026-04-11 03:41:40 | INFO  | It takes a moment until task 77e14c7e-915f-4330-94ec-2a457d961172 (octavia) has been started and output is visible here. 
2026-04-11 03:43:47.733525 | orchestrator | 2026-04-11 03:43:47.733640 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 03:43:47.733655 | orchestrator | 2026-04-11 03:43:47.733664 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 03:43:47.733674 | orchestrator | Saturday 11 April 2026 03:41:44 +0000 (0:00:00.306) 0:00:00.306 ******** 2026-04-11 03:43:47.733682 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:43:47.733691 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:43:47.733699 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:43:47.733709 | orchestrator | 2026-04-11 03:43:47.733722 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 03:43:47.733736 | orchestrator | Saturday 11 April 2026 03:41:45 +0000 (0:00:00.364) 0:00:00.670 ******** 2026-04-11 03:43:47.733781 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-11 03:43:47.733798 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-11 03:43:47.733812 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-11 03:43:47.733821 | orchestrator | 2026-04-11 03:43:47.733828 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-11 03:43:47.733837 | orchestrator | 2026-04-11 03:43:47.733845 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-11 03:43:47.733853 | orchestrator | Saturday 11 April 2026 03:41:45 +0000 (0:00:00.481) 0:00:01.152 ******** 2026-04-11 03:43:47.733861 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:43:47.733870 | orchestrator | 2026-04-11 03:43:47.733878 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-04-11 03:43:47.733886 | orchestrator | Saturday 11 April 2026 03:41:46 +0000 (0:00:00.640) 0:00:01.792 ******** 2026-04-11 03:43:47.733894 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-04-11 03:43:47.733902 | orchestrator | 2026-04-11 03:43:47.733909 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-04-11 03:43:47.733917 | orchestrator | Saturday 11 April 2026 03:41:49 +0000 (0:00:03.322) 0:00:05.115 ******** 2026-04-11 03:43:47.733925 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-04-11 03:43:47.733933 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-04-11 03:43:47.733941 | orchestrator | 2026-04-11 03:43:47.733949 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-04-11 03:43:47.733957 | orchestrator | Saturday 11 April 2026 03:41:56 +0000 (0:00:06.345) 0:00:11.460 ******** 2026-04-11 03:43:47.733965 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-11 03:43:47.733972 | orchestrator | 2026-04-11 03:43:47.733980 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-04-11 03:43:47.733988 | orchestrator | Saturday 11 April 2026 03:41:59 +0000 (0:00:03.162) 0:00:14.622 ******** 2026-04-11 03:43:47.733996 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-11 03:43:47.734071 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-11 03:43:47.734084 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-11 03:43:47.734094 | orchestrator | 2026-04-11 03:43:47.734103 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-04-11 03:43:47.734112 | orchestrator | Saturday 11 April 2026 03:42:07 +0000 
(0:00:08.248) 0:00:22.871 ******** 2026-04-11 03:43:47.734121 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-11 03:43:47.734151 | orchestrator | 2026-04-11 03:43:47.734161 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-04-11 03:43:47.734171 | orchestrator | Saturday 11 April 2026 03:42:10 +0000 (0:00:03.193) 0:00:26.065 ******** 2026-04-11 03:43:47.734180 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-11 03:43:47.734189 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-11 03:43:47.734197 | orchestrator | 2026-04-11 03:43:47.734217 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-04-11 03:43:47.734226 | orchestrator | Saturday 11 April 2026 03:42:18 +0000 (0:00:07.351) 0:00:33.416 ******** 2026-04-11 03:43:47.734235 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-04-11 03:43:47.734245 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-04-11 03:43:47.734254 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-04-11 03:43:47.734263 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-04-11 03:43:47.734272 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-04-11 03:43:47.734289 | orchestrator | 2026-04-11 03:43:47.734298 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-11 03:43:47.734307 | orchestrator | Saturday 11 April 2026 03:42:33 +0000 (0:00:15.472) 0:00:48.889 ******** 2026-04-11 03:43:47.734316 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:43:47.734325 | orchestrator | 2026-04-11 03:43:47.734335 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-04-11 03:43:47.734343 | orchestrator | Saturday 11 April 2026 03:42:34 +0000 (0:00:00.886) 0:00:49.776 ******** 2026-04-11 03:43:47.734354 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:43:47.734363 | orchestrator | 2026-04-11 03:43:47.734372 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-04-11 03:43:47.734382 | orchestrator | Saturday 11 April 2026 03:42:39 +0000 (0:00:05.005) 0:00:54.781 ******** 2026-04-11 03:43:47.734391 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:43:47.734400 | orchestrator | 2026-04-11 03:43:47.734408 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-11 03:43:47.734434 | orchestrator | Saturday 11 April 2026 03:42:43 +0000 (0:00:03.929) 0:00:58.710 ******** 2026-04-11 03:43:47.734443 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:43:47.734452 | orchestrator | 2026-04-11 03:43:47.734462 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-04-11 03:43:47.734470 | orchestrator | Saturday 11 April 2026 03:42:46 +0000 (0:00:03.125) 0:01:01.836 ******** 2026-04-11 03:43:47.734478 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-11 03:43:47.734486 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-11 03:43:47.734494 | orchestrator | 2026-04-11 03:43:47.734501 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-04-11 03:43:47.734509 | orchestrator | Saturday 11 April 2026 03:42:57 +0000 (0:00:10.975) 0:01:12.811 ******** 2026-04-11 03:43:47.734517 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-04-11 03:43:47.734526 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-04-11 03:43:47.734535 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-04-11 03:43:47.734548 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-04-11 03:43:47.734556 | orchestrator | 2026-04-11 03:43:47.734564 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-04-11 03:43:47.734572 | orchestrator | Saturday 11 April 2026 03:43:12 +0000 (0:00:15.464) 0:01:28.275 ******** 2026-04-11 03:43:47.734580 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:43:47.734588 | orchestrator | 2026-04-11 03:43:47.734596 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-04-11 03:43:47.734604 | orchestrator | Saturday 11 April 2026 03:43:17 +0000 (0:00:04.631) 0:01:32.907 ******** 2026-04-11 03:43:47.734612 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:43:47.734620 | orchestrator | 2026-04-11 03:43:47.734628 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-04-11 03:43:47.734636 | orchestrator | Saturday 11 April 2026 03:43:22 +0000 (0:00:05.253) 0:01:38.160 ******** 2026-04-11 03:43:47.734644 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:43:47.734652 | orchestrator | 2026-04-11 03:43:47.734660 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-04-11 03:43:47.734668 | orchestrator | Saturday 11 April 2026 03:43:22 +0000 (0:00:00.228) 0:01:38.389 ******** 2026-04-11 03:43:47.734676 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:43:47.734684 | orchestrator | 2026-04-11 03:43:47.734692 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-04-11 03:43:47.734705 | orchestrator | Saturday 11 April 2026 03:43:27 +0000 (0:00:04.643) 0:01:43.032 ******** 2026-04-11 03:43:47.734719 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:43:47.734727 | orchestrator | 2026-04-11 03:43:47.734735 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-04-11 03:43:47.734743 | orchestrator | Saturday 11 April 2026 03:43:28 +0000 (0:00:01.195) 0:01:44.228 ******** 2026-04-11 03:43:47.734751 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:43:47.734762 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:43:47.734776 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:43:47.734791 | orchestrator | 2026-04-11 03:43:47.734805 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-04-11 03:43:47.734820 | orchestrator | Saturday 11 April 2026 03:43:34 +0000 (0:00:05.649) 0:01:49.877 ******** 2026-04-11 03:43:47.734835 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:43:47.734849 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:43:47.734863 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:43:47.734880 | orchestrator | 2026-04-11 03:43:47.734888 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-04-11 03:43:47.734896 | orchestrator | Saturday 11 April 2026 03:43:40 +0000 (0:00:05.576) 0:01:55.454 ******** 2026-04-11 03:43:47.734904 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:43:47.734912 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:43:47.734920 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:43:47.734928 | orchestrator | 2026-04-11 03:43:47.734936 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-04-11 
03:43:47.734944 | orchestrator | Saturday 11 April 2026 03:43:41 +0000 (0:00:01.092) 0:01:56.546 ******** 2026-04-11 03:43:47.734951 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:43:47.734959 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:43:47.734967 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:43:47.734975 | orchestrator | 2026-04-11 03:43:47.734983 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-04-11 03:43:47.734991 | orchestrator | Saturday 11 April 2026 03:43:42 +0000 (0:00:01.810) 0:01:58.357 ******** 2026-04-11 03:43:47.734999 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:43:47.735007 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:43:47.735015 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:43:47.735023 | orchestrator | 2026-04-11 03:43:47.735031 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-04-11 03:43:47.735039 | orchestrator | Saturday 11 April 2026 03:43:44 +0000 (0:00:01.348) 0:01:59.705 ******** 2026-04-11 03:43:47.735047 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:43:47.735054 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:43:47.735062 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:43:47.735070 | orchestrator | 2026-04-11 03:43:47.735078 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-04-11 03:43:47.735086 | orchestrator | Saturday 11 April 2026 03:43:45 +0000 (0:00:01.213) 0:02:00.919 ******** 2026-04-11 03:43:47.735094 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:43:47.735102 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:43:47.735110 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:43:47.735118 | orchestrator | 2026-04-11 03:43:47.735186 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-04-11 03:44:14.346874 | orchestrator 
| Saturday 11 April 2026 03:43:47 +0000 (0:00:02.207) 0:02:03.126 ******** 2026-04-11 03:44:14.347791 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:44:14.347831 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:44:14.347843 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:44:14.347854 | orchestrator | 2026-04-11 03:44:14.347866 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-04-11 03:44:14.347877 | orchestrator | Saturday 11 April 2026 03:43:49 +0000 (0:00:01.502) 0:02:04.629 ******** 2026-04-11 03:44:14.347912 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:44:14.347921 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:44:14.347928 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:44:14.347934 | orchestrator | 2026-04-11 03:44:14.347941 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-04-11 03:44:14.347948 | orchestrator | Saturday 11 April 2026 03:43:49 +0000 (0:00:00.697) 0:02:05.327 ******** 2026-04-11 03:44:14.347955 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:44:14.347961 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:44:14.347968 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:44:14.347974 | orchestrator | 2026-04-11 03:44:14.347982 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-11 03:44:14.347988 | orchestrator | Saturday 11 April 2026 03:43:54 +0000 (0:00:04.138) 0:02:09.465 ******** 2026-04-11 03:44:14.347996 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:44:14.348003 | orchestrator | 2026-04-11 03:44:14.348010 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-11 03:44:14.348017 | orchestrator | Saturday 11 April 2026 03:43:54 +0000 (0:00:00.599) 0:02:10.065 ******** 2026-04-11 
03:44:14.348023 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:44:14.348030 | orchestrator | 2026-04-11 03:44:14.348036 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-11 03:44:14.348043 | orchestrator | Saturday 11 April 2026 03:43:58 +0000 (0:00:03.563) 0:02:13.628 ******** 2026-04-11 03:44:14.348050 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:44:14.348056 | orchestrator | 2026-04-11 03:44:14.348063 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-11 03:44:14.348070 | orchestrator | Saturday 11 April 2026 03:44:01 +0000 (0:00:03.177) 0:02:16.806 ******** 2026-04-11 03:44:14.348077 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-11 03:44:14.348084 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-11 03:44:14.348091 | orchestrator | 2026-04-11 03:44:14.348099 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-11 03:44:14.348109 | orchestrator | Saturday 11 April 2026 03:44:08 +0000 (0:00:06.840) 0:02:23.647 ******** 2026-04-11 03:44:14.348119 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:44:14.348173 | orchestrator | 2026-04-11 03:44:14.348187 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-11 03:44:14.348198 | orchestrator | Saturday 11 April 2026 03:44:11 +0000 (0:00:03.488) 0:02:27.135 ******** 2026-04-11 03:44:14.348207 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:44:14.348231 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:44:14.348240 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:44:14.348249 | orchestrator | 2026-04-11 03:44:14.348259 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-11 03:44:14.348269 | orchestrator | Saturday 11 April 2026 03:44:12 +0000 (0:00:00.588) 0:02:27.724 ******** 
2026-04-11 03:44:14.348285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:44:14.348330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:44:14.348340 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:44:14.348347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:44:14.348359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:44:14.348366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:44:14.348374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:14.348386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:14.348398 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:15.915678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:15.915754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:15.915777 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:15.915787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:44:15.915819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:44:15.915827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:44:15.915835 | orchestrator | 2026-04-11 03:44:15.915844 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-11 03:44:15.915854 | orchestrator | Saturday 11 April 2026 03:44:14 +0000 (0:00:02.469) 0:02:30.193 ******** 2026-04-11 03:44:15.915861 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:44:15.915869 | orchestrator | 2026-04-11 03:44:15.915877 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-11 03:44:15.915883 | orchestrator | Saturday 11 April 2026 03:44:14 +0000 (0:00:00.147) 0:02:30.341 ******** 2026-04-11 03:44:15.915890 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:44:15.915909 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:44:15.915914 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:44:15.915918 | orchestrator | 2026-04-11 03:44:15.915923 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-11 03:44:15.915928 | orchestrator | Saturday 11 April 2026 03:44:15 +0000 (0:00:00.349) 0:02:30.691 ******** 2026-04-11 03:44:15.915934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 03:44:15.915944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 03:44:15.915950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 03:44:15.915960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 03:44:15.915965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 03:44:15.915974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:44:20.906860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 03:44:20.906936 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:44:20.906945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 03:44:20.906962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 03:44:20.906980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:44:20.906985 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:44:20.906990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 03:44:20.906995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 03:44:20.907010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 03:44:20.907014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 03:44:20.907021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:44:20.907029 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:44:20.907033 | orchestrator | 2026-04-11 03:44:20.907038 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-11 03:44:20.907043 | orchestrator | Saturday 11 April 2026 03:44:16 +0000 (0:00:00.726) 0:02:31.417 ******** 2026-04-11 03:44:20.907048 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:44:20.907051 | orchestrator | 2026-04-11 03:44:20.907055 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-11 03:44:20.907059 | orchestrator | Saturday 11 April 2026 03:44:16 +0000 (0:00:00.778) 0:02:32.195 ******** 2026-04-11 03:44:20.907063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:44:20.907068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:44:20.907078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:44:22.477777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:44:22.477868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:44:22.477878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:44:22.477887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:22.477896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:22.477904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:22.477927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:22.477960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:22.477969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:22.477977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:44:22.477985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:44:22.477993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:44:22.478000 | orchestrator | 2026-04-11 03:44:22.478010 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-11 03:44:22.478072 | orchestrator | Saturday 11 April 2026 03:44:21 +0000 (0:00:05.052) 0:02:37.248 ******** 2026-04-11 03:44:22.478089 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 03:44:22.624415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 03:44:22.624552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 03:44:22.624579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 03:44:22.624599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:44:22.624619 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:44:22.624640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 03:44:22.624690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 03:44:22.624750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 03:44:22.624772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 03:44:22.624791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:44:22.624813 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:44:22.624834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 03:44:22.624856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 03:44:22.624890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 03:44:22.624933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-04-11 03:44:23.491439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:44:23.491546 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:44:23.491564 | orchestrator | 2026-04-11 03:44:23.491579 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-11 03:44:23.491592 | orchestrator | Saturday 11 April 2026 03:44:22 +0000 (0:00:00.776) 0:02:38.024 ******** 2026-04-11 03:44:23.491606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-04-11 03:44:23.491619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 03:44:23.491631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 03:44:23.491672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 03:44:23.491719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:44:23.491732 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:44:23.491743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 03:44:23.491754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 03:44:23.491767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 03:44:23.491778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 03:44:23.491799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:44:23.491810 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:44:23.491835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 03:44:28.207599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 03:44:28.207696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 03:44:28.207708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 03:44:28.207737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 03:44:28.207746 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:44:28.207754 | orchestrator | 2026-04-11 03:44:28.207762 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-11 
03:44:28.207771 | orchestrator | Saturday 11 April 2026 03:44:24 +0000 (0:00:01.421) 0:02:39.446 ******** 2026-04-11 03:44:28.207779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:44:28.207810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:44:28.207815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:44:28.207819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:44:28.207828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:44:28.207832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:44:28.207836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:28.207846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:45.743670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:45.743819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:45.743883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:45.743907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:44:45.743926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:44:45.743962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-04-11 03:44:45.743995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:44:45.744008 | orchestrator | 2026-04-11 03:44:45.744022 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-11 03:44:45.744035 | orchestrator | Saturday 11 April 2026 03:44:29 +0000 (0:00:05.116) 0:02:44.563 ******** 2026-04-11 03:44:45.744046 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-11 03:44:45.744058 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-11 03:44:45.744068 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-11 03:44:45.744079 | orchestrator | 2026-04-11 03:44:45.744090 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-11 03:44:45.744101 | orchestrator | Saturday 11 April 2026 03:44:30 +0000 (0:00:01.670) 0:02:46.233 ******** 2026-04-11 03:44:45.744123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:44:45.744169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:44:45.744200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:44:45.744243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:45:01.730282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:45:01.730417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:45:01.730501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:45:01.730530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:45:01.730551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:45:01.730590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:45:01.730635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:45:01.730658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:45:01.730693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:45:01.730716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:45:01.730737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:45:01.730757 | orchestrator | 2026-04-11 03:45:01.730776 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-11 03:45:01.730790 | orchestrator | Saturday 11 April 2026 03:44:49 +0000 (0:00:18.475) 0:03:04.709 ******** 2026-04-11 03:45:01.730801 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:45:01.730814 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:45:01.730825 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:45:01.730836 | orchestrator | 2026-04-11 03:45:01.730847 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-11 03:45:01.730858 | orchestrator | Saturday 11 April 2026 03:44:51 +0000 (0:00:01.856) 0:03:06.566 ******** 2026-04-11 03:45:01.730869 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-11 03:45:01.730880 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-11 03:45:01.730891 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-11 03:45:01.730902 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-11 03:45:01.730913 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-11 03:45:01.730924 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-11 03:45:01.730935 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-11 03:45:01.730958 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-11 03:45:01.730984 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-11 03:45:01.731006 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-11 03:45:01.731023 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-11 03:45:01.731041 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-11 03:45:01.731071 | orchestrator | 2026-04-11 03:45:01.731089 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-11 03:45:01.731105 | orchestrator | Saturday 11 April 2026 03:44:56 +0000 (0:00:05.151) 0:03:11.717 ******** 2026-04-11 03:45:01.731123 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-11 03:45:01.731222 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-11 03:45:01.731252 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-11 03:45:10.281691 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-11 03:45:10.281782 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-11 03:45:10.281791 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-11 03:45:10.281797 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-11 03:45:10.281803 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-11 03:45:10.281809 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-11 03:45:10.281817 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-11 03:45:10.281826 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-11 03:45:10.281838 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-11 03:45:10.281850 | orchestrator | 2026-04-11 03:45:10.281860 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-11 03:45:10.281871 | orchestrator | Saturday 11 April 2026 03:45:01 +0000 (0:00:05.405) 0:03:17.122 ******** 2026-04-11 03:45:10.281880 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-04-11 03:45:10.281888 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-11 03:45:10.281897 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-11 03:45:10.281906 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-11 03:45:10.281915 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-11 03:45:10.281925 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-11 03:45:10.281935 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-11 03:45:10.281944 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-11 03:45:10.281954 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-11 03:45:10.281962 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-11 03:45:10.281968 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-11 03:45:10.281974 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-11 03:45:10.281979 | orchestrator | 2026-04-11 03:45:10.281985 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-11 03:45:10.281991 | orchestrator | Saturday 11 April 2026 03:45:07 +0000 (0:00:05.369) 0:03:22.492 ******** 2026-04-11 03:45:10.282001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:45:10.282070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:45:10.282118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 03:45:10.282126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:45:10.282133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 03:45:10.282217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-04-11 03:45:10.282226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:45:10.282239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:45:10.282249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 03:45:10.282261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:46:39.274532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:46:39.274643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 03:46:39.274661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:46:39.274680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:46:39.274744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 03:46:39.274763 | orchestrator | 2026-04-11 
03:46:39.274782 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-11 03:46:39.274798 | orchestrator | Saturday 11 April 2026 03:45:11 +0000 (0:00:04.066) 0:03:26.558 ******** 2026-04-11 03:46:39.274814 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:46:39.274831 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:46:39.274846 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:46:39.274861 | orchestrator | 2026-04-11 03:46:39.274876 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-11 03:46:39.274893 | orchestrator | Saturday 11 April 2026 03:45:11 +0000 (0:00:00.343) 0:03:26.901 ******** 2026-04-11 03:46:39.274908 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:46:39.274924 | orchestrator | 2026-04-11 03:46:39.274939 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-11 03:46:39.274956 | orchestrator | Saturday 11 April 2026 03:45:13 +0000 (0:00:02.183) 0:03:29.085 ******** 2026-04-11 03:46:39.274969 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:46:39.274983 | orchestrator | 2026-04-11 03:46:39.275000 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-11 03:46:39.275017 | orchestrator | Saturday 11 April 2026 03:45:15 +0000 (0:00:02.207) 0:03:31.292 ******** 2026-04-11 03:46:39.275034 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:46:39.275049 | orchestrator | 2026-04-11 03:46:39.275064 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-11 03:46:39.275081 | orchestrator | Saturday 11 April 2026 03:45:18 +0000 (0:00:02.230) 0:03:33.522 ******** 2026-04-11 03:46:39.275118 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:46:39.275134 | orchestrator | 2026-04-11 03:46:39.275144 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-04-11 03:46:39.275192 | orchestrator | Saturday 11 April 2026 03:45:20 +0000 (0:00:02.268) 0:03:35.791 ******** 2026-04-11 03:46:39.275203 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:46:39.275212 | orchestrator | 2026-04-11 03:46:39.275222 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-11 03:46:39.275231 | orchestrator | Saturday 11 April 2026 03:45:43 +0000 (0:00:22.846) 0:03:58.637 ******** 2026-04-11 03:46:39.275241 | orchestrator | 2026-04-11 03:46:39.275250 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-11 03:46:39.275260 | orchestrator | Saturday 11 April 2026 03:45:43 +0000 (0:00:00.090) 0:03:58.727 ******** 2026-04-11 03:46:39.275269 | orchestrator | 2026-04-11 03:46:39.275279 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-11 03:46:39.275288 | orchestrator | Saturday 11 April 2026 03:45:43 +0000 (0:00:00.086) 0:03:58.814 ******** 2026-04-11 03:46:39.275298 | orchestrator | 2026-04-11 03:46:39.275307 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-11 03:46:39.275329 | orchestrator | Saturday 11 April 2026 03:45:43 +0000 (0:00:00.076) 0:03:58.891 ******** 2026-04-11 03:46:39.275339 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:46:39.275348 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:46:39.275358 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:46:39.275367 | orchestrator | 2026-04-11 03:46:39.275377 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-11 03:46:39.275387 | orchestrator | Saturday 11 April 2026 03:45:56 +0000 (0:00:13.468) 0:04:12.360 ******** 2026-04-11 03:46:39.275396 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:46:39.275406 | orchestrator | changed: 
[testbed-node-2] 2026-04-11 03:46:39.275415 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:46:39.275425 | orchestrator | 2026-04-11 03:46:39.275434 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-11 03:46:39.275444 | orchestrator | Saturday 11 April 2026 03:46:08 +0000 (0:00:11.564) 0:04:23.924 ******** 2026-04-11 03:46:39.275453 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:46:39.275463 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:46:39.275473 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:46:39.275482 | orchestrator | 2026-04-11 03:46:39.275492 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-11 03:46:39.275502 | orchestrator | Saturday 11 April 2026 03:46:19 +0000 (0:00:10.806) 0:04:34.731 ******** 2026-04-11 03:46:39.275511 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:46:39.275521 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:46:39.275530 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:46:39.275540 | orchestrator | 2026-04-11 03:46:39.275550 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-11 03:46:39.275560 | orchestrator | Saturday 11 April 2026 03:46:30 +0000 (0:00:11.033) 0:04:45.764 ******** 2026-04-11 03:46:39.275569 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:46:39.275583 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:46:39.275598 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:46:39.275622 | orchestrator | 2026-04-11 03:46:39.275641 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 03:46:39.275658 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-11 03:46:39.275698 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-11 03:46:39.275714 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-11 03:46:39.275731 | orchestrator | 2026-04-11 03:46:39.275741 | orchestrator | 2026-04-11 03:46:39.275750 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 03:46:39.275760 | orchestrator | Saturday 11 April 2026 03:46:39 +0000 (0:00:08.880) 0:04:54.644 ******** 2026-04-11 03:46:39.275770 | orchestrator | =============================================================================== 2026-04-11 03:46:39.275788 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.85s 2026-04-11 03:46:39.275798 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.48s 2026-04-11 03:46:39.275808 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.47s 2026-04-11 03:46:39.275817 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.46s 2026-04-11 03:46:39.275827 | orchestrator | octavia : Restart octavia-api container -------------------------------- 13.47s 2026-04-11 03:46:39.275837 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.56s 2026-04-11 03:46:39.275852 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 11.03s 2026-04-11 03:46:39.275868 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.98s 2026-04-11 03:46:39.275904 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.81s 2026-04-11 03:46:39.275921 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.88s 2026-04-11 03:46:39.275937 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.25s 2026-04-11 03:46:39.275953 
| orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.35s 2026-04-11 03:46:39.275968 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.84s 2026-04-11 03:46:39.275984 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.35s 2026-04-11 03:46:39.276010 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.65s 2026-04-11 03:46:39.650614 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.58s 2026-04-11 03:46:39.650718 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.41s 2026-04-11 03:46:39.650732 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.37s 2026-04-11 03:46:39.650744 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.25s 2026-04-11 03:46:39.650756 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.15s 2026-04-11 03:46:42.296792 | orchestrator | 2026-04-11 03:46:42 | INFO  | Task 69bd9e43-8524-4b00-becc-df3e6835d11a (ceilometer) was prepared for execution. 2026-04-11 03:46:42.296896 | orchestrator | 2026-04-11 03:46:42 | INFO  | It takes a moment until task 69bd9e43-8524-4b00-becc-df3e6835d11a (ceilometer) has been started and output is visible here. 
2026-04-11 03:47:07.109488 | orchestrator | 2026-04-11 03:47:07.109602 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 03:47:07.109622 | orchestrator | 2026-04-11 03:47:07.109634 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 03:47:07.109645 | orchestrator | Saturday 11 April 2026 03:46:47 +0000 (0:00:00.319) 0:00:00.319 ******** 2026-04-11 03:47:07.109655 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:47:07.109665 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:47:07.109675 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:47:07.109685 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:47:07.109695 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:47:07.109704 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:47:07.109715 | orchestrator | 2026-04-11 03:47:07.109725 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 03:47:07.109736 | orchestrator | Saturday 11 April 2026 03:46:47 +0000 (0:00:00.819) 0:00:01.138 ******** 2026-04-11 03:47:07.109748 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-04-11 03:47:07.109759 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-04-11 03:47:07.109770 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-04-11 03:47:07.109780 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-04-11 03:47:07.109790 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-04-11 03:47:07.109802 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-04-11 03:47:07.109809 | orchestrator | 2026-04-11 03:47:07.109815 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-04-11 03:47:07.109822 | orchestrator | 2026-04-11 03:47:07.109828 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-04-11 03:47:07.109835 | orchestrator | Saturday 11 April 2026 03:46:48 +0000 (0:00:00.691) 0:00:01.829 ******** 2026-04-11 03:47:07.109843 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:47:07.109851 | orchestrator | 2026-04-11 03:47:07.109857 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-04-11 03:47:07.109863 | orchestrator | Saturday 11 April 2026 03:46:49 +0000 (0:00:01.339) 0:00:03.169 ******** 2026-04-11 03:47:07.109892 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:07.109899 | orchestrator | 2026-04-11 03:47:07.109905 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-04-11 03:47:07.109911 | orchestrator | Saturday 11 April 2026 03:46:50 +0000 (0:00:00.120) 0:00:03.290 ******** 2026-04-11 03:47:07.109917 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:07.109924 | orchestrator | 2026-04-11 03:47:07.109930 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-04-11 03:47:07.109936 | orchestrator | Saturday 11 April 2026 03:46:50 +0000 (0:00:00.149) 0:00:03.439 ******** 2026-04-11 03:47:07.109943 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-11 03:47:07.109949 | orchestrator | 2026-04-11 03:47:07.109955 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-04-11 03:47:07.109961 | orchestrator | Saturday 11 April 2026 03:46:54 +0000 (0:00:03.965) 0:00:07.405 ******** 2026-04-11 03:47:07.109980 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-11 03:47:07.109986 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-04-11 03:47:07.109993 | orchestrator | 
2026-04-11 03:47:07.109999 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-04-11 03:47:07.110005 | orchestrator | Saturday 11 April 2026 03:46:58 +0000 (0:00:04.032) 0:00:11.438 ******** 2026-04-11 03:47:07.110012 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-11 03:47:07.110069 | orchestrator | 2026-04-11 03:47:07.110077 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-04-11 03:47:07.110085 | orchestrator | Saturday 11 April 2026 03:47:01 +0000 (0:00:03.217) 0:00:14.655 ******** 2026-04-11 03:47:07.110092 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-04-11 03:47:07.110099 | orchestrator | 2026-04-11 03:47:07.110107 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-04-11 03:47:07.110114 | orchestrator | Saturday 11 April 2026 03:47:05 +0000 (0:00:04.059) 0:00:18.714 ******** 2026-04-11 03:47:07.110121 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:07.110129 | orchestrator | 2026-04-11 03:47:07.110137 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-04-11 03:47:07.110147 | orchestrator | Saturday 11 April 2026 03:47:05 +0000 (0:00:00.155) 0:00:18.870 ******** 2026-04-11 03:47:07.110190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:07.110247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:07.110271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:07.110315 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:07.110337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:07.110349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:07.110360 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:07.110379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:12.216190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:12.216299 | orchestrator | 2026-04-11 03:47:12.216329 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-04-11 03:47:12.216359 | orchestrator | Saturday 11 April 2026 03:47:07 +0000 (0:00:01.493) 0:00:20.364 ******** 2026-04-11 03:47:12.216374 | orchestrator | ok: [testbed-node-1 -> 
localhost] 2026-04-11 03:47:12.216386 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 03:47:12.216398 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-11 03:47:12.216409 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 03:47:12.216420 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 03:47:12.216430 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 03:47:12.216441 | orchestrator | 2026-04-11 03:47:12.216452 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-04-11 03:47:12.216465 | orchestrator | Saturday 11 April 2026 03:47:08 +0000 (0:00:01.737) 0:00:22.101 ******** 2026-04-11 03:47:12.216478 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:47:12.216490 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:47:12.216501 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:47:12.216513 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:47:12.216524 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:47:12.216536 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:47:12.216547 | orchestrator | 2026-04-11 03:47:12.216558 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-04-11 03:47:12.216571 | orchestrator | Saturday 11 April 2026 03:47:09 +0000 (0:00:00.689) 0:00:22.791 ******** 2026-04-11 03:47:12.216583 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:12.216597 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:12.216605 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:12.216612 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:12.216620 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:12.216627 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:12.216634 | orchestrator | 2026-04-11 03:47:12.216641 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter 
definitions] *** 2026-04-11 03:47:12.216650 | orchestrator | Saturday 11 April 2026 03:47:10 +0000 (0:00:00.841) 0:00:23.633 ******** 2026-04-11 03:47:12.216657 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:47:12.216665 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:47:12.216672 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:47:12.216679 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:47:12.216686 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:47:12.216693 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:47:12.216702 | orchestrator | 2026-04-11 03:47:12.216751 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-04-11 03:47:12.216760 | orchestrator | Saturday 11 April 2026 03:47:11 +0000 (0:00:00.689) 0:00:24.323 ******** 2026-04-11 03:47:12.216771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:12.216781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:12.216800 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:12.216825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:12.216834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:12.216843 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:12.216852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:12.216865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:12.216876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:12.216885 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:12.216898 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:12.216907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:12.216915 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:12.216930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:17.404234 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:17.404403 | orchestrator | 2026-04-11 03:47:17.404435 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-04-11 03:47:17.404455 | orchestrator | Saturday 11 April 2026 03:47:12 +0000 (0:00:01.147) 0:00:25.470 ******** 2026-04-11 03:47:17.404476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:17.404501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:17.404519 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:17.404560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:17.404583 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:17.404634 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:17.404656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:17.404675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-04-11 03:47:17.404693 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:17.404740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:17.404762 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:17.404780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:17.404797 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:17.404823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:17.404855 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:17.404872 | orchestrator | 2026-04-11 03:47:17.404891 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-04-11 03:47:17.404910 | orchestrator | Saturday 11 April 2026 03:47:13 +0000 (0:00:00.913) 0:00:26.383 ******** 2026-04-11 03:47:17.404926 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 03:47:17.404942 | orchestrator | 2026-04-11 03:47:17.404959 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-04-11 03:47:17.404977 | orchestrator | Saturday 11 April 2026 03:47:13 +0000 (0:00:00.772) 0:00:27.155 ******** 2026-04-11 03:47:17.404994 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:47:17.405012 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:47:17.405029 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:47:17.405046 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:47:17.405062 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:47:17.405077 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:47:17.405094 | orchestrator | 2026-04-11 03:47:17.405110 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-04-11 03:47:17.405127 | orchestrator | Saturday 11 April 2026 03:47:14 +0000 (0:00:00.911) 
0:00:28.067 ******** 2026-04-11 03:47:17.405142 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:47:17.405188 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:47:17.405206 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:47:17.405221 | orchestrator | ok: [testbed-node-3] 2026-04-11 03:47:17.405236 | orchestrator | ok: [testbed-node-4] 2026-04-11 03:47:17.405252 | orchestrator | ok: [testbed-node-5] 2026-04-11 03:47:17.405267 | orchestrator | 2026-04-11 03:47:17.405284 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-04-11 03:47:17.405300 | orchestrator | Saturday 11 April 2026 03:47:15 +0000 (0:00:01.064) 0:00:29.132 ******** 2026-04-11 03:47:17.405316 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:17.405333 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:17.405349 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:17.405365 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:17.405380 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:17.405389 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:17.405399 | orchestrator | 2026-04-11 03:47:17.405409 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-04-11 03:47:17.405419 | orchestrator | Saturday 11 April 2026 03:47:16 +0000 (0:00:00.857) 0:00:29.989 ******** 2026-04-11 03:47:17.405429 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:17.405438 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:17.405448 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:17.405458 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:17.405467 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:17.405477 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:17.405487 | orchestrator | 2026-04-11 03:47:22.877026 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-04-11 03:47:22.877130 | orchestrator | Saturday 11 April 2026 03:47:17 +0000 (0:00:00.674) 0:00:30.664 ******** 2026-04-11 03:47:22.877144 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-11 03:47:22.877240 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 03:47:22.877260 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-11 03:47:22.877276 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 03:47:22.877286 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 03:47:22.877296 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 03:47:22.877332 | orchestrator | 2026-04-11 03:47:22.877342 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-04-11 03:47:22.877380 | orchestrator | Saturday 11 April 2026 03:47:19 +0000 (0:00:01.750) 0:00:32.414 ******** 2026-04-11 03:47:22.877394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:22.877421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:22.877433 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:22.877443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:22.877454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:22.877464 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:22.877474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:22.877502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:22.877521 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:22.877532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:22.877543 | orchestrator | skipping: [testbed-node-3] 
2026-04-11 03:47:22.877558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:22.877569 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:22.877579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:22.877588 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:22.877598 | orchestrator | 2026-04-11 03:47:22.877608 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-04-11 03:47:22.877618 | orchestrator | Saturday 11 April 2026 03:47:19 +0000 (0:00:00.839) 0:00:33.254 ******** 2026-04-11 03:47:22.877628 | orchestrator | 
skipping: [testbed-node-0] 2026-04-11 03:47:22.877637 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:22.877647 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:22.877656 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:22.877666 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:22.877675 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:22.877685 | orchestrator | 2026-04-11 03:47:22.877694 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-04-11 03:47:22.877704 | orchestrator | Saturday 11 April 2026 03:47:20 +0000 (0:00:00.888) 0:00:34.143 ******** 2026-04-11 03:47:22.877713 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-11 03:47:22.877723 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 03:47:22.877732 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-11 03:47:22.877742 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 03:47:22.877751 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 03:47:22.877762 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 03:47:22.877788 | orchestrator | 2026-04-11 03:47:22.877797 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-04-11 03:47:22.877807 | orchestrator | Saturday 11 April 2026 03:47:22 +0000 (0:00:01.421) 0:00:35.564 ******** 2026-04-11 03:47:22.877825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:29.133459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:29.133564 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:29.133579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:29.133604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:29.133613 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:29.133621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:29.133630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:29.133659 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:29.133669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:29.133678 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:29.133702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:29.133708 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:29.133716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:29.133721 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:29.133726 | orchestrator | 2026-04-11 03:47:29.133732 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-04-11 03:47:29.133738 | orchestrator | Saturday 11 April 2026 03:47:23 +0000 (0:00:01.151) 0:00:36.716 ******** 2026-04-11 03:47:29.133743 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:29.133747 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:29.133752 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:29.133757 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:29.133761 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:29.133766 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:29.133770 | orchestrator | 2026-04-11 03:47:29.133775 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-04-11 03:47:29.133780 | orchestrator | Saturday 11 April 2026 03:47:24 +0000 (0:00:00.933) 0:00:37.650 ******** 2026-04-11 03:47:29.133784 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:29.133790 | orchestrator | 2026-04-11 03:47:29.133794 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-04-11 03:47:29.133799 | orchestrator | Saturday 11 April 2026 03:47:24 +0000 (0:00:00.159) 0:00:37.810 ******** 2026-04-11 03:47:29.133804 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:29.133813 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:29.133818 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:29.133823 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:29.133827 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:29.133832 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:29.133836 | 
orchestrator | 2026-04-11 03:47:29.133841 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-11 03:47:29.133846 | orchestrator | Saturday 11 April 2026 03:47:25 +0000 (0:00:00.644) 0:00:38.454 ******** 2026-04-11 03:47:29.133852 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 03:47:29.133858 | orchestrator | 2026-04-11 03:47:29.133862 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-04-11 03:47:29.133867 | orchestrator | Saturday 11 April 2026 03:47:26 +0000 (0:00:01.431) 0:00:39.886 ******** 2026-04-11 03:47:29.133872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:29.133881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:29.728885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:29.729003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:29.729024 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:29.729062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:29.729076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:29.729088 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:29.729124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:29.729151 | orchestrator | 2026-04-11 03:47:29.729242 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-04-11 03:47:29.729262 | orchestrator | Saturday 11 April 2026 03:47:29 +0000 (0:00:02.504) 0:00:42.390 ******** 2026-04-11 03:47:29.729293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  
2026-04-11 03:47:29.729328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:29.729347 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:29.729366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:29.729386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:29.729405 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:29.729425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:29.729461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:31.834914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:31.835067 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:31.835106 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:31.835122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:31.835135 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:31.835148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:31.835213 | orchestrator | 
skipping: [testbed-node-5] 2026-04-11 03:47:31.835222 | orchestrator | 2026-04-11 03:47:31.835231 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-04-11 03:47:31.835240 | orchestrator | Saturday 11 April 2026 03:47:30 +0000 (0:00:00.943) 0:00:43.333 ******** 2026-04-11 03:47:31.835248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:31.835258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:31.835266 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:31.835291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:31.835314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:31.835323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:31.835331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:31.835339 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:31.835346 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:31.835355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:31.835369 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:31.835380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:31.835391 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:31.835410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:39.890866 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:39.890948 | orchestrator | 2026-04-11 03:47:39.890959 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-04-11 03:47:39.890967 | orchestrator | Saturday 11 April 2026 03:47:31 +0000 (0:00:01.750) 0:00:45.084 ******** 2026-04-11 03:47:39.891051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:39.891065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:39.891073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:39.891080 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:39.891089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:39.891127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:39.891139 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:39.891215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:39.891226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:39.891232 | orchestrator | 2026-04-11 03:47:39.891239 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-04-11 03:47:39.891245 | orchestrator | Saturday 11 April 2026 03:47:34 +0000 (0:00:02.693) 0:00:47.778 ******** 2026-04-11 03:47:39.891253 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:39.891261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:39.891282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 
2026-04-11 03:47:50.142262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:50.143365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:50.143469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:50.143495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:50.143515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:50.143567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:50.143586 | orchestrator | 2026-04-11 03:47:50.143604 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-04-11 03:47:50.143650 | orchestrator | Saturday 11 April 2026 03:47:39 +0000 (0:00:05.368) 0:00:53.147 ******** 2026-04-11 03:47:50.143668 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 03:47:50.143685 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-11 03:47:50.143700 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-11 03:47:50.143715 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 03:47:50.143748 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 03:47:50.143782 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 03:47:50.143799 | orchestrator | 2026-04-11 03:47:50.143815 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-04-11 03:47:50.143832 | orchestrator | Saturday 11 April 2026 03:47:41 +0000 (0:00:01.694) 0:00:54.841 ******** 2026-04-11 03:47:50.143848 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:50.143864 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:50.143880 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:50.143896 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:50.143911 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:50.143925 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:50.143941 | orchestrator | 2026-04-11 03:47:50.143957 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-04-11 03:47:50.143973 | orchestrator | Saturday 11 April 2026 03:47:42 +0000 (0:00:00.692) 0:00:55.534 ******** 2026-04-11 03:47:50.143989 | orchestrator | 
skipping: [testbed-node-3] 2026-04-11 03:47:50.144005 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:50.144021 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:50.144036 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:47:50.144051 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:47:50.144067 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:47:50.144082 | orchestrator | 2026-04-11 03:47:50.144099 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-04-11 03:47:50.144114 | orchestrator | Saturday 11 April 2026 03:47:44 +0000 (0:00:01.747) 0:00:57.282 ******** 2026-04-11 03:47:50.144131 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:50.144146 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:50.144193 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:50.144210 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:47:50.144224 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:47:50.144239 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:47:50.144253 | orchestrator | 2026-04-11 03:47:50.144269 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-04-11 03:47:50.144284 | orchestrator | Saturday 11 April 2026 03:47:45 +0000 (0:00:01.435) 0:00:58.718 ******** 2026-04-11 03:47:50.144299 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 03:47:50.144413 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-11 03:47:50.144427 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-11 03:47:50.144436 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 03:47:50.144445 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 03:47:50.144453 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 03:47:50.144462 | orchestrator | 2026-04-11 03:47:50.144477 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] 
********************* 2026-04-11 03:47:50.144492 | orchestrator | Saturday 11 April 2026 03:47:47 +0000 (0:00:01.876) 0:01:00.594 ******** 2026-04-11 03:47:50.144511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:50.144531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:50.144567 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:50.775205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:50.775312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:50.775352 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:47:50.775366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:50.775379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:50.775391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:47:50.775403 | orchestrator | 2026-04-11 03:47:50.775416 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-04-11 03:47:50.775429 | orchestrator | Saturday 11 April 2026 03:47:50 +0000 (0:00:02.802) 0:01:03.397 ******** 2026-04-11 03:47:50.775466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:50.775479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:50.775507 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:50.775520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:50.775531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:50.775543 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:50.775554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:50.775566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:50.775592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:52.947804 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:52.947907 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:52.947925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 
'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:52.947965 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:52.947978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:52.947989 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:52.948001 | orchestrator | 2026-04-11 03:47:52.948013 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-04-11 03:47:52.948025 | orchestrator | Saturday 11 April 2026 03:47:51 +0000 (0:00:00.978) 0:01:04.375 ******** 2026-04-11 03:47:52.948037 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:52.948055 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:52.948081 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:52.948102 | orchestrator | skipping: 
[testbed-node-3] 2026-04-11 03:47:52.948119 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:52.948137 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:47:52.948152 | orchestrator | 2026-04-11 03:47:52.948196 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-04-11 03:47:52.948215 | orchestrator | Saturday 11 April 2026 03:47:51 +0000 (0:00:00.875) 0:01:05.251 ******** 2026-04-11 03:47:52.948235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:52.948258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:52.948279 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:47:52.948338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 
'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:52.948366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:52.948379 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:47:52.948392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 03:47:52.948406 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 03:47:52.948418 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:47:52.948432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:52.948445 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:47:52.948458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:47:52.948471 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:47:52.948496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-11 03:48:22.333811 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:48:22.333916 | orchestrator | 2026-04-11 03:48:22.333930 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-04-11 03:48:22.333942 | orchestrator | Saturday 11 April 2026 03:47:52 +0000 (0:00:00.952) 0:01:06.203 ******** 2026-04-11 03:48:22.333953 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:48:22.333967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:48:22.333979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:48:22.333988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 03:48:22.333998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:48:22.334108 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-11 03:48:22.334123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 
'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:48:22.334134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:48:22.334143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 03:48:22.334153 | orchestrator | 2026-04-11 03:48:22.334205 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-11 03:48:22.334230 | orchestrator | Saturday 11 April 2026 03:47:54 +0000 
(0:00:01.985) 0:01:08.189 ******** 2026-04-11 03:48:22.334240 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:48:22.334249 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:48:22.334258 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:48:22.334266 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:48:22.334279 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:48:22.334295 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:48:22.334309 | orchestrator | 2026-04-11 03:48:22.334325 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-04-11 03:48:22.334340 | orchestrator | Saturday 11 April 2026 03:47:55 +0000 (0:00:00.679) 0:01:08.869 ******** 2026-04-11 03:48:22.334356 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:48:22.334383 | orchestrator | 2026-04-11 03:48:22.334399 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-11 03:48:22.334416 | orchestrator | Saturday 11 April 2026 03:48:00 +0000 (0:00:04.872) 0:01:13.741 ******** 2026-04-11 03:48:22.334431 | orchestrator | 2026-04-11 03:48:22.334447 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-11 03:48:22.334464 | orchestrator | Saturday 11 April 2026 03:48:00 +0000 (0:00:00.075) 0:01:13.817 ******** 2026-04-11 03:48:22.334480 | orchestrator | 2026-04-11 03:48:22.334496 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-11 03:48:22.334513 | orchestrator | Saturday 11 April 2026 03:48:00 +0000 (0:00:00.073) 0:01:13.891 ******** 2026-04-11 03:48:22.334528 | orchestrator | 2026-04-11 03:48:22.334544 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-11 03:48:22.334561 | orchestrator | Saturday 11 April 2026 03:48:00 +0000 (0:00:00.274) 0:01:14.165 ******** 2026-04-11 03:48:22.334577 | 
orchestrator | 2026-04-11 03:48:22.334591 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-11 03:48:22.334601 | orchestrator | Saturday 11 April 2026 03:48:00 +0000 (0:00:00.083) 0:01:14.249 ******** 2026-04-11 03:48:22.334611 | orchestrator | 2026-04-11 03:48:22.334621 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-11 03:48:22.334631 | orchestrator | Saturday 11 April 2026 03:48:01 +0000 (0:00:00.076) 0:01:14.325 ******** 2026-04-11 03:48:22.334641 | orchestrator | 2026-04-11 03:48:22.334650 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-04-11 03:48:22.334667 | orchestrator | Saturday 11 April 2026 03:48:01 +0000 (0:00:00.074) 0:01:14.399 ******** 2026-04-11 03:48:22.334677 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:48:22.334687 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:48:22.334697 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:48:22.334707 | orchestrator | 2026-04-11 03:48:22.334716 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-04-11 03:48:22.334725 | orchestrator | Saturday 11 April 2026 03:48:12 +0000 (0:00:11.197) 0:01:25.597 ******** 2026-04-11 03:48:22.334734 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:48:22.334751 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:48:33.966932 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:48:33.967032 | orchestrator | 2026-04-11 03:48:33.967044 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-04-11 03:48:33.967054 | orchestrator | Saturday 11 April 2026 03:48:22 +0000 (0:00:09.989) 0:01:35.586 ******** 2026-04-11 03:48:33.967060 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:48:33.967067 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:48:33.967075 | 
orchestrator | changed: [testbed-node-4] 2026-04-11 03:48:33.967082 | orchestrator | 2026-04-11 03:48:33.967089 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 03:48:33.967098 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-11 03:48:33.967106 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-11 03:48:33.967114 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-11 03:48:33.967121 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-11 03:48:33.967128 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-11 03:48:33.967135 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-11 03:48:33.967196 | orchestrator | 2026-04-11 03:48:33.967204 | orchestrator | 2026-04-11 03:48:33.967210 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 03:48:33.967217 | orchestrator | Saturday 11 April 2026 03:48:33 +0000 (0:00:11.096) 0:01:46.683 ******** 2026-04-11 03:48:33.967222 | orchestrator | =============================================================================== 2026-04-11 03:48:33.967228 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 11.20s 2026-04-11 03:48:33.967234 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.10s 2026-04-11 03:48:33.967239 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.99s 2026-04-11 03:48:33.967246 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 5.37s 2026-04-11 03:48:33.967252 | 
orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.87s 2026-04-11 03:48:33.967258 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.06s 2026-04-11 03:48:33.967265 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 4.03s 2026-04-11 03:48:33.967271 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.97s 2026-04-11 03:48:33.967277 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.22s 2026-04-11 03:48:33.967284 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.80s 2026-04-11 03:48:33.967291 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.69s 2026-04-11 03:48:33.967297 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.50s 2026-04-11 03:48:33.967303 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.99s 2026-04-11 03:48:33.967309 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.88s 2026-04-11 03:48:33.967315 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.75s 2026-04-11 03:48:33.967321 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.75s 2026-04-11 03:48:33.967327 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.75s 2026-04-11 03:48:33.967333 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.74s 2026-04-11 03:48:33.967339 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.69s 2026-04-11 03:48:33.967345 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.49s 2026-04-11 03:48:36.611160 | 
orchestrator | 2026-04-11 03:48:36 | INFO  | Task b59247e3-2c86-4903-b9ca-2c04bde52216 (aodh) was prepared for execution. 2026-04-11 03:48:36.612327 | orchestrator | 2026-04-11 03:48:36 | INFO  | It takes a moment until task b59247e3-2c86-4903-b9ca-2c04bde52216 (aodh) has been started and output is visible here. 2026-04-11 03:49:08.974598 | orchestrator | 2026-04-11 03:49:08.974719 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 03:49:08.974738 | orchestrator | 2026-04-11 03:49:08.974763 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 03:49:08.974777 | orchestrator | Saturday 11 April 2026 03:48:41 +0000 (0:00:00.308) 0:00:00.308 ******** 2026-04-11 03:49:08.974795 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:49:08.974811 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:49:08.974821 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:49:08.974831 | orchestrator | 2026-04-11 03:49:08.974841 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 03:49:08.974851 | orchestrator | Saturday 11 April 2026 03:48:41 +0000 (0:00:00.356) 0:00:00.664 ******** 2026-04-11 03:49:08.974861 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-04-11 03:49:08.974871 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-04-11 03:49:08.974881 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-04-11 03:49:08.974899 | orchestrator | 2026-04-11 03:49:08.974941 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-04-11 03:49:08.974952 | orchestrator | 2026-04-11 03:49:08.974961 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-11 03:49:08.974976 | orchestrator | Saturday 11 April 2026 03:48:42 +0000 (0:00:00.468) 0:00:01.133 ******** 2026-04-11 
03:49:08.974993 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:49:08.975010 | orchestrator | 2026-04-11 03:49:08.975029 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-04-11 03:49:08.975046 | orchestrator | Saturday 11 April 2026 03:48:42 +0000 (0:00:00.592) 0:00:01.726 ******** 2026-04-11 03:49:08.975063 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-04-11 03:49:08.975079 | orchestrator | 2026-04-11 03:49:08.975103 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-04-11 03:49:08.975120 | orchestrator | Saturday 11 April 2026 03:48:46 +0000 (0:00:03.375) 0:00:05.101 ******** 2026-04-11 03:49:08.975136 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-04-11 03:49:08.975152 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-04-11 03:49:08.975197 | orchestrator | 2026-04-11 03:49:08.975216 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-04-11 03:49:08.975234 | orchestrator | Saturday 11 April 2026 03:48:52 +0000 (0:00:06.573) 0:00:11.675 ******** 2026-04-11 03:49:08.975250 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-11 03:49:08.975267 | orchestrator | 2026-04-11 03:49:08.975283 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-04-11 03:49:08.975298 | orchestrator | Saturday 11 April 2026 03:48:56 +0000 (0:00:03.424) 0:00:15.099 ******** 2026-04-11 03:49:08.975314 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-11 03:49:08.975331 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-04-11 03:49:08.975346 | orchestrator | 2026-04-11 03:49:08.975361 | 
orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-04-11 03:49:08.975375 | orchestrator | Saturday 11 April 2026 03:48:59 +0000 (0:00:03.890) 0:00:18.990 ******** 2026-04-11 03:49:08.975391 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-11 03:49:08.975408 | orchestrator | 2026-04-11 03:49:08.975425 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-04-11 03:49:08.975441 | orchestrator | Saturday 11 April 2026 03:49:03 +0000 (0:00:03.116) 0:00:22.106 ******** 2026-04-11 03:49:08.975458 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-04-11 03:49:08.975474 | orchestrator | 2026-04-11 03:49:08.975489 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-04-11 03:49:08.975503 | orchestrator | Saturday 11 April 2026 03:49:06 +0000 (0:00:03.751) 0:00:25.858 ******** 2026-04-11 03:49:08.975522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:08.975581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:08.975618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:08.975637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:08.975656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:08.975674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:08.975687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:08.975715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:10.402634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:10.402731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:10.402746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:10.402757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:10.402768 | orchestrator | 2026-04-11 03:49:10.402779 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-04-11 03:49:10.402791 | orchestrator | Saturday 11 April 2026 03:49:08 +0000 (0:00:02.152) 0:00:28.011 ******** 2026-04-11 03:49:10.402802 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:49:10.402814 | orchestrator | 2026-04-11 03:49:10.402825 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-04-11 03:49:10.402836 | orchestrator | Saturday 11 April 2026 03:49:09 +0000 (0:00:00.160) 0:00:28.172 ******** 
2026-04-11 03:49:10.402846 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:49:10.402857 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:49:10.402868 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:49:10.402878 | orchestrator | 2026-04-11 03:49:10.402889 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-04-11 03:49:10.402900 | orchestrator | Saturday 11 April 2026 03:49:09 +0000 (0:00:00.560) 0:00:28.733 ******** 2026-04-11 03:49:10.402967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-11 03:49:10.403009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 03:49:10.403023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:49:10.403035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 03:49:10.403047 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:49:10.403058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-11 03:49:10.403070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 03:49:10.403088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:49:10.403114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 03:49:15.506882 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:49:15.506989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-11 03:49:15.507010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 03:49:15.507024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:49:15.507037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 03:49:15.507076 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:49:15.507089 | orchestrator | 2026-04-11 03:49:15.507100 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-11 03:49:15.507112 | orchestrator | Saturday 11 April 2026 03:49:10 +0000 (0:00:00.712) 0:00:29.445 ******** 2026-04-11 03:49:15.507122 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:49:15.507133 | orchestrator | 2026-04-11 03:49:15.507143 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-04-11 03:49:15.507154 | orchestrator | Saturday 11 April 2026 03:49:11 +0000 (0:00:00.809) 0:00:30.255 ******** 2026-04-11 03:49:15.507237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:15.507275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:15.507288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:15.507299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:15.507322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:15.507334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:15.507353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:15.507375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:16.259719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:16.259832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:16.259857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:16.259909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:16.259928 | orchestrator | 2026-04-11 03:49:16.259949 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-04-11 03:49:16.259968 | orchestrator | Saturday 11 April 2026 03:49:15 +0000 (0:00:04.300) 0:00:34.555 ******** 2026-04-11 03:49:16.259990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-11 03:49:16.260029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 03:49:16.260074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:49:16.260095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 03:49:16.260115 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:49:16.260149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-11 03:49:16.260162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 03:49:16.260202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:49:16.260220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 03:49:16.260233 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:49:16.260256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-11 03:49:17.446769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 03:49:17.446906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:49:17.446927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 03:49:17.446943 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:49:17.446974 | orchestrator | 2026-04-11 03:49:17.447001 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-04-11 03:49:17.447018 | orchestrator | Saturday 11 April 2026 03:49:16 +0000 (0:00:00.740) 0:00:35.296 ******** 2026-04-11 03:49:17.447033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-11 03:49:17.447064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 03:49:17.447078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:49:17.447113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 03:49:17.447148 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:49:17.447163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-11 03:49:17.447211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 03:49:17.447227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:49:17.447249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 03:49:17.447263 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:49:17.447286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8042', 'listen_port': '8042'}}}})  2026-04-11 03:49:21.643556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 03:49:21.643663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 03:49:21.643679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 03:49:21.643692 | orchestrator | skipping: [testbed-node-2] 
2026-04-11 03:49:21.643705 | orchestrator | 2026-04-11 03:49:21.643718 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-04-11 03:49:21.643731 | orchestrator | Saturday 11 April 2026 03:49:17 +0000 (0:00:01.196) 0:00:36.493 ******** 2026-04-11 03:49:21.643743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:21.643773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:21.643866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:21.643881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:21.643893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:21.643904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:21.643916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:21.643933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:21.643953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:21.643973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:30.723375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}}) 2026-04-11 03:49:30.724321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:30.724361 | orchestrator | 2026-04-11 03:49:30.724371 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-04-11 03:49:30.724380 | orchestrator | Saturday 11 April 2026 03:49:21 +0000 (0:00:04.196) 0:00:40.690 ******** 2026-04-11 03:49:30.724390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:30.724414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:30.724442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:30.724470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:30.724479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:30.724487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:30.724495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:30.724507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:30.724521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:30.724529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}}) 2026-04-11 03:49:30.724543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:36.028135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:36.028302 | orchestrator | 2026-04-11 03:49:36.028321 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-04-11 03:49:36.028334 | orchestrator | Saturday 11 April 2026 03:49:30 +0000 (0:00:09.073) 0:00:49.764 ******** 2026-04-11 03:49:36.028344 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:49:36.028355 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:49:36.028365 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:49:36.028375 | orchestrator | 2026-04-11 03:49:36.028385 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-04-11 03:49:36.028395 | orchestrator | Saturday 11 April 2026 
03:49:32 +0000 (0:00:01.927) 0:00:51.692 ******** 2026-04-11 03:49:36.028422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:36.028459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:36.028478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-11 03:49:36.028525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:36.028547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}}) 2026-04-11 03:49:36.028564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 03:49:36.028591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:36.028621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:36.028640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': 
{'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:36.028658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 03:49:36.028687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 03:50:36.296005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-11 03:50:36.296125 | orchestrator |
2026-04-11 03:50:36.296144 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-04-11 03:50:36.296158 | orchestrator | Saturday 11 April 2026 03:49:36 +0000 (0:00:03.376) 0:00:55.068 ********
2026-04-11 03:50:36.296196 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:50:36.296208 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:50:36.296219 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:50:36.296230 | orchestrator |
2026-04-11 03:50:36.296240 | orchestrator | TASK [aodh : Creating aodh database] *******************************************
2026-04-11 03:50:36.296278 | orchestrator | Saturday 11 April 2026 03:49:36 +0000 (0:00:00.375) 0:00:55.443 ********
2026-04-11 03:50:36.296288 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:50:36.296296 | orchestrator |
2026-04-11 03:50:36.296306 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] **************
2026-04-11 03:50:36.296315 | orchestrator | Saturday 11 April 2026 03:49:38 +0000 (0:00:02.138) 0:00:57.582 ********
2026-04-11 03:50:36.296323 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:50:36.296331 | orchestrator |
2026-04-11 03:50:36.296340 | orchestrator | TASK [aodh : Running aodh bootstrap container] *********************************
2026-04-11 03:50:36.296348 | orchestrator | Saturday 11 April 2026 03:49:40 +0000 (0:00:02.284) 0:00:59.866 ********
2026-04-11 03:50:36.296357 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:50:36.296366 | orchestrator |
2026-04-11 03:50:36.296375 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-04-11 03:50:36.296384 | orchestrator | Saturday 11 April 2026 03:49:53 +0000 (0:00:12.937) 0:01:12.803 ********
2026-04-11 03:50:36.296394 | orchestrator |
2026-04-11 03:50:36.296403 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-04-11 03:50:36.296426 | orchestrator | Saturday 11 April 2026 03:49:53 +0000 (0:00:00.073) 0:01:12.876 ********
2026-04-11 03:50:36.296436 | orchestrator |
2026-04-11 03:50:36.296445 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-04-11 03:50:36.296454 | orchestrator | Saturday 11 April 2026 03:49:53 +0000 (0:00:00.071) 0:01:12.948 ********
2026-04-11 03:50:36.296465 | orchestrator |
2026-04-11 03:50:36.296476 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] ****************************
2026-04-11 03:50:36.296486 | orchestrator | Saturday 11 April 2026 03:49:54 +0000 (0:00:00.298) 0:01:13.247 ********
2026-04-11 03:50:36.296495 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:50:36.296504 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:50:36.296513 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:50:36.296523 | orchestrator |
2026-04-11 03:50:36.296533 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] **********************
2026-04-11 03:50:36.296542 | orchestrator | Saturday 11 April 2026 03:50:05 +0000 (0:00:11.224) 0:01:24.471 ********
2026-04-11 03:50:36.296552 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:50:36.296560 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:50:36.296570 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:50:36.296579 | orchestrator |
2026-04-11 03:50:36.296589 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-04-11 03:50:36.296599 | orchestrator | Saturday 11 April 2026 03:50:16 +0000 (0:00:10.875) 0:01:35.346 ********
2026-04-11 03:50:36.296609 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:50:36.296619 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:50:36.296629 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:50:36.296638 | orchestrator |
2026-04-11 03:50:36.296648 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-04-11 03:50:36.296657 | orchestrator | Saturday 11 April 2026 03:50:24 +0000 (0:00:08.475) 0:01:43.822 ********
2026-04-11 03:50:36.296667 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:50:36.296676 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:50:36.296686 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:50:36.296695 | orchestrator |
2026-04-11 03:50:36.296704 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 03:50:36.296716 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 03:50:36.296727 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-11 03:50:36.296737 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-11 03:50:36.296760 | orchestrator |
2026-04-11 03:50:36.296770 | orchestrator |
2026-04-11 03:50:36.296779 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 03:50:36.296789 | orchestrator | Saturday 11 April 2026 03:50:35 +0000 (0:00:11.129) 0:01:54.952 ********
2026-04-11 03:50:36.296798 | orchestrator | ===============================================================================
2026-04-11 03:50:36.296809 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 12.94s
2026-04-11 03:50:36.296818 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 11.22s
2026-04-11 03:50:36.296851 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 11.13s
2026-04-11 03:50:36.296861 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 10.88s
2026-04-11 03:50:36.296872 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 9.07s
2026-04-11 03:50:36.296883 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 8.48s
2026-04-11 03:50:36.296892 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.57s
2026-04-11 03:50:36.296903 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.30s
2026-04-11 03:50:36.296912 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.20s
2026-04-11 03:50:36.296921 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.89s
2026-04-11 03:50:36.296930 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.75s
2026-04-11 03:50:36.296940 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.42s
2026-04-11 03:50:36.296951 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.38s
2026-04-11 03:50:36.296961 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.38s
2026-04-11 03:50:36.296972 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.12s
2026-04-11 03:50:36.296982 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.28s
2026-04-11 03:50:36.296993 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.15s
2026-04-11 03:50:36.297004 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.14s
2026-04-11 03:50:36.297015 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.93s
2026-04-11 03:50:36.297025 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.20s
2026-04-11 03:50:38.878563 | orchestrator | 2026-04-11 03:50:38 | INFO  | Task 48e7e3ea-b967-4ff6-bd73-d9dcd352a834 (kolla-ceph-rgw) was prepared for execution.
2026-04-11 03:50:38.878695 | orchestrator | 2026-04-11 03:50:38 | INFO  | It takes a moment until task 48e7e3ea-b967-4ff6-bd73-d9dcd352a834 (kolla-ceph-rgw) has been started and output is visible here.
2026-04-11 03:51:17.681768 | orchestrator |
2026-04-11 03:51:17.681871 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 03:51:17.681882 | orchestrator |
2026-04-11 03:51:17.681889 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 03:51:17.681896 | orchestrator | Saturday 11 April 2026 03:50:43 +0000 (0:00:00.344) 0:00:00.344 ********
2026-04-11 03:51:17.681902 | orchestrator | ok: [testbed-manager]
2026-04-11 03:51:17.681913 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:51:17.681924 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:51:17.681934 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:51:17.681941 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:51:17.681947 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:51:17.681953 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:51:17.681960 | orchestrator |
2026-04-11 03:51:17.681966 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 03:51:17.681973 | orchestrator | Saturday 11 April 2026 03:50:44 +0000 (0:00:00.961) 0:00:01.305 ********
2026-04-11 03:51:17.681979 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-04-11 03:51:17.682003 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-04-11 03:51:17.682010 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-04-11 03:51:17.682061 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-04-11 03:51:17.682068 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-04-11 03:51:17.682074 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-04-11 03:51:17.682080 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-04-11 03:51:17.682086 | orchestrator |
2026-04-11 03:51:17.682092 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-04-11 03:51:17.682099 | orchestrator |
2026-04-11 03:51:17.682105 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-04-11 03:51:17.682111 | orchestrator | Saturday 11 April 2026 03:50:45 +0000 (0:00:01.000) 0:00:02.305 ********
2026-04-11 03:51:17.682118 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:51:17.682126 | orchestrator |
2026-04-11 03:51:17.682132 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-04-11 03:51:17.682138 | orchestrator | Saturday 11 April 2026 03:50:47 +0000 (0:00:01.840) 0:00:04.145 ********
2026-04-11 03:51:17.682144 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-04-11 03:51:17.682176 | orchestrator |
2026-04-11 03:51:17.682187 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-04-11 03:51:17.682194 | orchestrator | Saturday 11 April 2026 03:50:51 +0000 (0:00:04.126) 0:00:08.272 ********
2026-04-11 03:51:17.682201 | orchestrator | changed:
[testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-11 03:51:17.682209 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-11 03:51:17.682215 | orchestrator | 2026-04-11 03:51:17.682221 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-11 03:51:17.682227 | orchestrator | Saturday 11 April 2026 03:50:58 +0000 (0:00:06.580) 0:00:14.853 ******** 2026-04-11 03:51:17.682233 | orchestrator | ok: [testbed-manager] => (item=service) 2026-04-11 03:51:17.682240 | orchestrator | 2026-04-11 03:51:17.682246 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-11 03:51:17.682252 | orchestrator | Saturday 11 April 2026 03:51:01 +0000 (0:00:03.349) 0:00:18.203 ******** 2026-04-11 03:51:17.682258 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-11 03:51:17.682265 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-04-11 03:51:17.682272 | orchestrator | 2026-04-11 03:51:17.682283 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-11 03:51:17.682289 | orchestrator | Saturday 11 April 2026 03:51:05 +0000 (0:00:03.916) 0:00:22.119 ******** 2026-04-11 03:51:17.682296 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-04-11 03:51:17.682302 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-04-11 03:51:17.682308 | orchestrator | 2026-04-11 03:51:17.682314 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-04-11 03:51:17.682320 | orchestrator | Saturday 11 April 2026 03:51:11 +0000 (0:00:06.522) 0:00:28.642 ******** 2026-04-11 03:51:17.682327 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 
2026-04-11 03:51:17.682334 | orchestrator | 2026-04-11 03:51:17.682342 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 03:51:17.682354 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 03:51:17.682362 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 03:51:17.682376 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 03:51:17.682383 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 03:51:17.682390 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 03:51:17.682411 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 03:51:17.682423 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 03:51:17.682431 | orchestrator | 2026-04-11 03:51:17.682438 | orchestrator | 2026-04-11 03:51:17.682445 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 03:51:17.682452 | orchestrator | Saturday 11 April 2026 03:51:17 +0000 (0:00:05.213) 0:00:33.855 ******** 2026-04-11 03:51:17.682459 | orchestrator | =============================================================================== 2026-04-11 03:51:17.682466 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.58s 2026-04-11 03:51:17.682474 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.52s 2026-04-11 03:51:17.682481 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.21s 2026-04-11 03:51:17.682488 | orchestrator | service-ks-register : ceph-rgw | Creating 
services ---------------------- 4.13s 2026-04-11 03:51:17.682495 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.92s 2026-04-11 03:51:17.682502 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.35s 2026-04-11 03:51:17.682510 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.84s 2026-04-11 03:51:17.682516 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.00s 2026-04-11 03:51:17.682524 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.96s 2026-04-11 03:51:20.337405 | orchestrator | 2026-04-11 03:51:20 | INFO  | Task f5a88c5d-e9a5-45e6-8456-28e658014a46 (gnocchi) was prepared for execution. 2026-04-11 03:51:20.337493 | orchestrator | 2026-04-11 03:51:20 | INFO  | It takes a moment until task f5a88c5d-e9a5-45e6-8456-28e658014a46 (gnocchi) has been started and output is visible here. 
2026-04-11 03:51:26.368671 | orchestrator | 2026-04-11 03:51:26.368764 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 03:51:26.368776 | orchestrator | 2026-04-11 03:51:26.368785 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 03:51:26.368792 | orchestrator | Saturday 11 April 2026 03:51:25 +0000 (0:00:00.301) 0:00:00.301 ******** 2026-04-11 03:51:26.368798 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:51:26.368805 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:51:26.368811 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:51:26.368817 | orchestrator | 2026-04-11 03:51:26.368823 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 03:51:26.368829 | orchestrator | Saturday 11 April 2026 03:51:25 +0000 (0:00:00.360) 0:00:00.661 ******** 2026-04-11 03:51:26.368835 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-04-11 03:51:26.368841 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-04-11 03:51:26.368848 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-04-11 03:51:26.368855 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-04-11 03:51:26.368861 | orchestrator | 2026-04-11 03:51:26.368868 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-04-11 03:51:26.368875 | orchestrator | skipping: no hosts matched 2026-04-11 03:51:26.368882 | orchestrator | 2026-04-11 03:51:26.368917 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 03:51:26.368926 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 03:51:26.368934 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2026-04-11 03:51:26.368941 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 03:51:26.368947 | orchestrator | 2026-04-11 03:51:26.368953 | orchestrator | 2026-04-11 03:51:26.368959 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 03:51:26.368965 | orchestrator | Saturday 11 April 2026 03:51:25 +0000 (0:00:00.424) 0:00:01.085 ******** 2026-04-11 03:51:26.368972 | orchestrator | =============================================================================== 2026-04-11 03:51:26.368979 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2026-04-11 03:51:26.368986 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-04-11 03:51:28.938361 | orchestrator | 2026-04-11 03:51:28 | INFO  | Task c2d2c512-468c-408c-a226-512a38111c43 (manila) was prepared for execution. 2026-04-11 03:51:28.938457 | orchestrator | 2026-04-11 03:51:28 | INFO  | It takes a moment until task c2d2c512-468c-408c-a226-512a38111c43 (manila) has been started and output is visible here. 
2026-04-11 03:52:10.843471 | orchestrator | 2026-04-11 03:52:10.843591 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 03:52:10.843606 | orchestrator | 2026-04-11 03:52:10.843619 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 03:52:10.843631 | orchestrator | Saturday 11 April 2026 03:51:33 +0000 (0:00:00.274) 0:00:00.274 ******** 2026-04-11 03:52:10.843642 | orchestrator | ok: [testbed-node-0] 2026-04-11 03:52:10.843654 | orchestrator | ok: [testbed-node-1] 2026-04-11 03:52:10.843665 | orchestrator | ok: [testbed-node-2] 2026-04-11 03:52:10.843676 | orchestrator | 2026-04-11 03:52:10.843688 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 03:52:10.843699 | orchestrator | Saturday 11 April 2026 03:51:33 +0000 (0:00:00.339) 0:00:00.613 ******** 2026-04-11 03:52:10.843727 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-04-11 03:52:10.843738 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-04-11 03:52:10.843750 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-04-11 03:52:10.843761 | orchestrator | 2026-04-11 03:52:10.843772 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-04-11 03:52:10.843782 | orchestrator | 2026-04-11 03:52:10.843794 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-11 03:52:10.843805 | orchestrator | Saturday 11 April 2026 03:51:34 +0000 (0:00:00.526) 0:00:01.140 ******** 2026-04-11 03:52:10.843816 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:52:10.843828 | orchestrator | 2026-04-11 03:52:10.843839 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-11 
03:52:10.843850 | orchestrator | Saturday 11 April 2026 03:51:35 +0000 (0:00:00.636) 0:00:01.776 ******** 2026-04-11 03:52:10.843861 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:52:10.843873 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:52:10.843884 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:52:10.843895 | orchestrator | 2026-04-11 03:52:10.843906 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-04-11 03:52:10.843917 | orchestrator | Saturday 11 April 2026 03:51:35 +0000 (0:00:00.524) 0:00:02.301 ******** 2026-04-11 03:52:10.843928 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-04-11 03:52:10.843939 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-04-11 03:52:10.843975 | orchestrator | 2026-04-11 03:52:10.843987 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-04-11 03:52:10.843999 | orchestrator | Saturday 11 April 2026 03:51:42 +0000 (0:00:06.371) 0:00:08.673 ******** 2026-04-11 03:52:10.844012 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-04-11 03:52:10.844025 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-04-11 03:52:10.844037 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-04-11 03:52:10.844049 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-04-11 03:52:10.844062 | orchestrator | 2026-04-11 03:52:10.844100 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************ 2026-04-11 03:52:10.844115 | orchestrator | Saturday 11 April 2026 03:51:54 +0000 (0:00:12.456) 0:00:21.129 ******** 2026-04-11 03:52:10.844128 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-11 03:52:10.844140 | orchestrator | 2026-04-11 03:52:10.844153 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-04-11 03:52:10.844165 | orchestrator | Saturday 11 April 2026 03:51:57 +0000 (0:00:03.184) 0:00:24.314 ******** 2026-04-11 03:52:10.844177 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-11 03:52:10.844189 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-04-11 03:52:10.844202 | orchestrator | 2026-04-11 03:52:10.844214 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-04-11 03:52:10.844227 | orchestrator | Saturday 11 April 2026 03:52:01 +0000 (0:00:03.929) 0:00:28.244 ******** 2026-04-11 03:52:10.844239 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-11 03:52:10.844253 | orchestrator | 2026-04-11 03:52:10.844266 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-04-11 03:52:10.844278 | orchestrator | Saturday 11 April 2026 03:52:04 +0000 (0:00:03.139) 0:00:31.383 ******** 2026-04-11 03:52:10.844291 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-04-11 03:52:10.844303 | orchestrator | 2026-04-11 03:52:10.844315 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-04-11 03:52:10.844327 | orchestrator | Saturday 11 April 2026 03:52:08 +0000 (0:00:03.845) 0:00:35.229 ******** 2026-04-11 03:52:10.844362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:52:10.844385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:52:10.844406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 
'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:52:10.844418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:10.844431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:10.844442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:10.844462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:22.239464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:22.239652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:22.239680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:22.239698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:22.239715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:22.239731 | orchestrator | 2026-04-11 03:52:22.239750 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-11 03:52:22.239769 | orchestrator | Saturday 11 April 2026 03:52:10 +0000 (0:00:02.316) 0:00:37.545 ******** 2026-04-11 03:52:22.239785 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:52:22.239800 | orchestrator | 2026-04-11 03:52:22.239810 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-04-11 03:52:22.239819 | orchestrator | Saturday 11 April 2026 03:52:11 +0000 (0:00:00.574) 0:00:38.120 ******** 2026-04-11 03:52:22.239828 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:52:22.239860 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:52:22.239869 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:52:22.239878 | orchestrator | 2026-04-11 03:52:22.239887 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-04-11 03:52:22.239895 | orchestrator | Saturday 11 April 2026 03:52:12 +0000 (0:00:01.118) 0:00:39.238 ******** 2026-04-11 03:52:22.239906 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-11 03:52:22.239946 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-11 03:52:22.239966 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-11 03:52:22.239977 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-11 03:52:22.239987 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-11 03:52:22.239996 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-11 03:52:22.240006 | orchestrator | 2026-04-11 03:52:22.240017 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-04-11 03:52:22.240027 | orchestrator | Saturday 11 April 2026 03:52:14 +0000 (0:00:01.928) 0:00:41.167 ******** 2026-04-11 03:52:22.240037 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-11 03:52:22.240048 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-11 03:52:22.240057 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-11 03:52:22.240101 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 
'protocols': ['NFS', 'CIFS']})  2026-04-11 03:52:22.240112 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-11 03:52:22.240122 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-11 03:52:22.240131 | orchestrator | 2026-04-11 03:52:22.240141 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-04-11 03:52:22.240150 | orchestrator | Saturday 11 April 2026 03:52:15 +0000 (0:00:01.295) 0:00:42.462 ******** 2026-04-11 03:52:22.240161 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-04-11 03:52:22.240171 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-04-11 03:52:22.240181 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-04-11 03:52:22.240190 | orchestrator | 2026-04-11 03:52:22.240200 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-04-11 03:52:22.240209 | orchestrator | Saturday 11 April 2026 03:52:16 +0000 (0:00:00.770) 0:00:43.233 ******** 2026-04-11 03:52:22.240219 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:52:22.240228 | orchestrator | 2026-04-11 03:52:22.240239 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-04-11 03:52:22.240248 | orchestrator | Saturday 11 April 2026 03:52:16 +0000 (0:00:00.171) 0:00:43.405 ******** 2026-04-11 03:52:22.240258 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:52:22.240269 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:52:22.240278 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:52:22.240288 | orchestrator | 2026-04-11 03:52:22.240305 | orchestrator | TASK [manila : include_tasks] 
************************************************** 2026-04-11 03:52:22.240321 | orchestrator | Saturday 11 April 2026 03:52:17 +0000 (0:00:00.575) 0:00:43.981 ******** 2026-04-11 03:52:22.240343 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 03:52:22.240360 | orchestrator | 2026-04-11 03:52:22.240375 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-04-11 03:52:22.240390 | orchestrator | Saturday 11 April 2026 03:52:17 +0000 (0:00:00.615) 0:00:44.597 ******** 2026-04-11 03:52:22.240418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:52:23.195509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:52:23.195598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:52:23.195611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:23.195623 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:23.195653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:23.195679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:23.195697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:23.195707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:23.195717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:23.195727 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:23.195743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:23.195753 | orchestrator | 2026-04-11 03:52:23.195764 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-04-11 03:52:23.195776 | orchestrator | Saturday 11 April 2026 03:52:22 +0000 (0:00:04.348) 0:00:48.945 ******** 2026-04-11 03:52:23.195798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-11 03:52:23.868425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:52:23.868529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 03:52:23.868548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 03:52:23.868590 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:52:23.868605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-11 03:52:23.868618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:52:23.868645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 03:52:23.868675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 03:52:23.868688 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:52:23.868700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-11 03:52:23.868712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:52:23.868732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 03:52:23.868744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 03:52:23.868755 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:52:23.868766 | orchestrator | 2026-04-11 03:52:23.868778 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-04-11 03:52:23.868791 | orchestrator | Saturday 11 April 2026 03:52:23 +0000 (0:00:00.958) 0:00:49.903 ******** 2026-04-11 03:52:23.868816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-11 03:52:28.726600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:52:28.726712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 03:52:28.726757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 03:52:28.726772 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:52:28.726786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-11 03:52:28.726799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:52:28.726824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 03:52:28.726855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 03:52:28.726867 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:52:28.726879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-11 03:52:28.726899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:52:28.726911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 03:52:28.726924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 03:52:28.726935 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:52:28.726947 | orchestrator | 2026-04-11 03:52:28.726959 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-04-11 03:52:28.726972 | orchestrator | Saturday 11 
April 2026 03:52:24 +0000 (0:00:00.950) 0:00:50.854 ******** 2026-04-11 03:52:28.726997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:52:36.047474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:52:36.047604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:52:36.047619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:36.047627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-11 03:52:36.047647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:36.047672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:36.047688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:36.047694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:36.047700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:36.047708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:36.047715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:36.047721 | orchestrator | 2026-04-11 03:52:36.047733 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-04-11 03:52:36.047741 | orchestrator | Saturday 11 April 2026 03:52:29 +0000 (0:00:04.824) 0:00:55.678 ******** 2026-04-11 03:52:36.047754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:52:40.678921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:52:40.679008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:52:40.679019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:40.679028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 03:52:40.679126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:40.679173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 03:52:40.679179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:40.679183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 03:52:40.679189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:40.679193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:40.679202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:52:40.679206 | orchestrator | 2026-04-11 03:52:40.679212 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-04-11 03:52:40.679222 | orchestrator | Saturday 11 April 2026 03:52:36 +0000 (0:00:07.071) 0:01:02.750 ******** 
2026-04-11 03:52:40.679227 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-04-11 03:52:40.679231 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-04-11 03:52:40.679235 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-04-11 03:52:40.679239 | orchestrator | 2026-04-11 03:52:40.679243 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-04-11 03:52:40.679247 | orchestrator | Saturday 11 April 2026 03:52:40 +0000 (0:00:03.886) 0:01:06.637 ******** 2026-04-11 03:52:40.679255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-11 03:52:44.040532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:52:44.040680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 03:52:44.040715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 03:52:44.040738 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:52:44.040776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-11 03:52:44.040816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 03:52:44.040829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 03:52:44.040864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 03:52:44.040877 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:52:44.040889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-11 03:52:44.040901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-04-11 03:52:44.040919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 03:52:44.040940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 03:52:44.040951 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:52:44.040963 | orchestrator | 2026-04-11 03:52:44.040975 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-04-11 03:52:44.040988 | orchestrator | Saturday 11 April 2026 03:52:40 +0000 (0:00:00.745) 0:01:07.382 ******** 2026-04-11 03:52:44.041010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:53:26.928903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:53:26.929074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-11 03:53:26.929123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:53:26.929133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:53:26.929140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 03:53:26.929164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 03:53:26.929174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 03:53:26.929181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 03:53:26.929195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:53:26.929206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 03:53:26.929213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 03:53:26.929221 | orchestrator |
2026-04-11 03:53:26.929229 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-04-11 03:53:26.929237 | orchestrator | Saturday 11 April 2026 03:52:44 +0000 (0:00:03.367) 0:01:10.750 ********
2026-04-11 03:53:26.929244 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:53:26.929252 | orchestrator |
2026-04-11 03:53:26.929259 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-04-11 03:53:26.929266 | orchestrator | Saturday 11 April 2026 03:52:46 +0000 (0:00:02.155) 0:01:12.905 ********
2026-04-11 03:53:26.929272 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:53:26.929279 | orchestrator |
2026-04-11 03:53:26.929285 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-04-11 03:53:26.929292 | orchestrator | Saturday 11 April 2026 03:52:48 +0000 (0:00:02.307) 0:01:15.212 ********
2026-04-11 03:53:26.929299 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:53:26.929305 | orchestrator |
2026-04-11 03:53:26.929312 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-04-11 03:53:26.929319 | orchestrator | Saturday 11 April 2026 03:53:26 +0000 (0:00:38.083) 0:01:53.296 ********
2026-04-11 03:53:26.929326 | orchestrator |
2026-04-11 03:53:26.929336 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-04-11 03:54:13.354222 | orchestrator | Saturday 11 April 2026 03:53:26 +0000 (0:00:00.080) 0:01:53.376 ********
2026-04-11 03:54:13.354343 | orchestrator |
2026-04-11 03:54:13.354360 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-04-11 03:54:13.354372 | orchestrator | Saturday 11 April 2026 03:53:26 +0000 (0:00:00.076) 0:01:53.453 ********
2026-04-11 03:54:13.354383 | orchestrator |
2026-04-11 03:54:13.354395 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-04-11 03:54:13.354406 | orchestrator | Saturday 11 April 2026 03:53:26 +0000 (0:00:00.076) 0:01:53.529 ********
2026-04-11 03:54:13.354417 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:54:13.354428 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:54:13.354440 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:54:13.354478 | orchestrator |
2026-04-11 03:54:13.354491 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-04-11 03:54:13.354502 | orchestrator | Saturday 11 April 2026 03:53:37 +0000 (0:00:10.773) 0:02:04.302 ********
2026-04-11 03:54:13.354512 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:54:13.354524 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:54:13.354534 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:54:13.354545 | orchestrator |
2026-04-11 03:54:13.354557 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-04-11 03:54:13.354593 | orchestrator | Saturday 11 April 2026 03:53:44 +0000 (0:00:06.333) 0:02:10.636 ********
2026-04-11 03:54:13.354604 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:54:13.354615 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:54:13.354625 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:54:13.354636 | orchestrator |
2026-04-11 03:54:13.354647 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-04-11 03:54:13.354658 | orchestrator | Saturday 11 April 2026 03:53:54 +0000 (0:00:10.178) 0:02:20.815 ********
2026-04-11 03:54:13.354669 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:54:13.354680 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:54:13.354691 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:54:13.354702 | orchestrator |
2026-04-11 03:54:13.354713 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 03:54:13.354726 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 03:54:13.354738 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-11 03:54:13.354749 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-11 03:54:13.354762 | orchestrator |
2026-04-11 03:54:13.354775 | orchestrator |
2026-04-11 03:54:13.354788 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 03:54:13.354801 | orchestrator | Saturday 11 April 2026 03:54:12 +0000 (0:00:18.648) 0:02:39.464 ********
2026-04-11 03:54:13.354828 | orchestrator | ===============================================================================
2026-04-11 03:54:13.354840 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 38.08s
2026-04-11 03:54:13.354853 | orchestrator | manila : Restart manila-share container -------------------------------- 18.65s
2026-04-11 03:54:13.354866 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 12.46s
2026-04-11 03:54:13.354878 | orchestrator | manila : Restart manila-api container ---------------------------------- 10.77s
2026-04-11 03:54:13.354890 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.18s
2026-04-11 03:54:13.354903 | orchestrator | manila : Copying over manila.conf --------------------------------------- 7.07s
2026-04-11 03:54:13.354916 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.37s
2026-04-11 03:54:13.354928 | orchestrator | manila : Restart manila-data container ---------------------------------- 6.33s
2026-04-11 03:54:13.355016 | orchestrator | manila : Copying over config.json files for services -------------------- 4.82s
2026-04-11 03:54:13.355041 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.35s
2026-04-11 03:54:13.355064 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.93s
2026-04-11 03:54:13.355084 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.89s
2026-04-11 03:54:13.355102 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.85s
2026-04-11 03:54:13.355120 | orchestrator | manila : Check manila containers ---------------------------------------- 3.37s
2026-04-11 03:54:13.355139 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.18s
2026-04-11 03:54:13.355172 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.14s
2026-04-11 03:54:13.355192 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.32s
2026-04-11 03:54:13.355212 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.31s
2026-04-11 03:54:13.355234 | orchestrator | manila : Creating Manila database --------------------------------------- 2.16s
2026-04-11 03:54:13.355254 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.93s
2026-04-11 03:54:13.750416 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-04-11 03:54:26.116579 | orchestrator | 2026-04-11 03:54:26
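The manila-data container definition at the top of this section carries a healthcheck of the form `['CMD-SHELL', 'healthcheck_port manila-data 5672']` with `interval: 30` and `retries: 3`. As a rough, hypothetical approximation only (kolla's `healthcheck_port` script checks that the named process owns a socket on the given port, not merely that the port accepts connections), the spirit of such a liveness probe can be sketched as a plain TCP connect check:

```python
import socket


def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Rough analogue of a container healthcheck probing a TCP port:
    succeed if something on host:port accepts a connection within
    the timeout, fail otherwise. This is a simplification of what
    kolla's healthcheck_port script does (it also verifies the
    owning process), used here purely for illustration."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A container runtime would run such a probe every `interval` seconds and mark the container unhealthy after `retries` consecutive failures.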
| INFO  | Task 4dd2a1d0-7057-464a-a8a0-644fdb5780af (netdata) was prepared for execution.
2026-04-11 03:54:26.116676 | orchestrator | 2026-04-11 03:54:26 | INFO  | It takes a moment until task 4dd2a1d0-7057-464a-a8a0-644fdb5780af (netdata) has been started and output is visible here.
2026-04-11 03:55:46.339563 | orchestrator |
2026-04-11 03:55:46.339638 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 03:55:46.339646 | orchestrator |
2026-04-11 03:55:46.339650 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 03:55:46.339655 | orchestrator | Saturday 11 April 2026 03:54:31 +0000 (0:00:00.271) 0:00:00.271 ********
2026-04-11 03:55:46.339659 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-04-11 03:55:46.339664 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-04-11 03:55:46.339668 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-04-11 03:55:46.339672 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-04-11 03:55:46.339676 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-04-11 03:55:46.339680 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-04-11 03:55:46.339684 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-04-11 03:55:46.339688 | orchestrator |
2026-04-11 03:55:46.339692 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-04-11 03:55:46.339695 | orchestrator |
2026-04-11 03:55:46.339699 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-04-11 03:55:46.339703 | orchestrator | Saturday 11 April 2026 03:54:32 +0000 (0:00:01.077) 0:00:01.348 ********
2026-04-11 03:55:46.339708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:55:46.339714 | orchestrator |
2026-04-11 03:55:46.339718 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-04-11 03:55:46.339722 | orchestrator | Saturday 11 April 2026 03:54:33 +0000 (0:00:01.489) 0:00:02.838 ********
2026-04-11 03:55:46.339726 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:55:46.339731 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:55:46.339735 | orchestrator | ok: [testbed-manager]
2026-04-11 03:55:46.339739 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:55:46.339743 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:55:46.339747 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:55:46.339751 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:55:46.339755 | orchestrator |
2026-04-11 03:55:46.339759 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-04-11 03:55:46.339763 | orchestrator | Saturday 11 April 2026 03:54:35 +0000 (0:00:01.976) 0:00:04.814 ********
2026-04-11 03:55:46.339766 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:55:46.339770 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:55:46.339774 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:55:46.339778 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:55:46.339782 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:55:46.339785 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:55:46.339789 | orchestrator | ok: [testbed-manager]
2026-04-11 03:55:46.339812 | orchestrator |
2026-04-11 03:55:46.339819 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-04-11 03:55:46.339839 | orchestrator | Saturday 11 April 2026 03:54:38 +0000 (0:00:02.315) 0:00:07.130 ********
2026-04-11 03:55:46.339845 | orchestrator | changed: [testbed-manager]
2026-04-11 03:55:46.339851 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:55:46.339858 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:55:46.339939 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:55:46.339946 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:55:46.339952 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:55:46.339958 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:55:46.339964 | orchestrator |
2026-04-11 03:55:46.339970 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-04-11 03:55:46.339975 | orchestrator | Saturday 11 April 2026 03:54:39 +0000 (0:00:01.706) 0:00:08.837 ********
2026-04-11 03:55:46.339981 | orchestrator | changed: [testbed-manager]
2026-04-11 03:55:46.339987 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:55:46.339993 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:55:46.339999 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:55:46.340006 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:55:46.340012 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:55:46.340017 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:55:46.340024 | orchestrator |
2026-04-11 03:55:46.340029 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-04-11 03:55:46.340034 | orchestrator | Saturday 11 April 2026 03:54:54 +0000 (0:00:15.192) 0:00:24.030 ********
2026-04-11 03:55:46.340038 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:55:46.340043 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:55:46.340049 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:55:46.340055 | orchestrator | changed: [testbed-manager]
2026-04-11 03:55:46.340060 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:55:46.340068 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:55:46.340076 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:55:46.340083 | orchestrator |
2026-04-11 03:55:46.340088 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-04-11 03:55:46.340094 | orchestrator | Saturday 11 April 2026 03:55:19 +0000 (0:00:24.201) 0:00:48.232 ********
2026-04-11 03:55:46.340102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:55:46.340109 | orchestrator |
2026-04-11 03:55:46.340115 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-04-11 03:55:46.340121 | orchestrator | Saturday 11 April 2026 03:55:20 +0000 (0:00:01.846) 0:00:50.078 ********
2026-04-11 03:55:46.340127 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-04-11 03:55:46.340134 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-04-11 03:55:46.340140 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-04-11 03:55:46.340146 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-04-11 03:55:46.340169 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-04-11 03:55:46.340175 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-04-11 03:55:46.340181 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-04-11 03:55:46.340187 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-04-11 03:55:46.340193 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-04-11 03:55:46.340199 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-04-11 03:55:46.340205 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-04-11 03:55:46.340211 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-04-11 03:55:46.340217 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-04-11 03:55:46.340223 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-04-11 03:55:46.340239 | orchestrator |
2026-04-11 03:55:46.340245 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-04-11 03:55:46.340252 | orchestrator | Saturday 11 April 2026 03:55:24 +0000 (0:00:03.811) 0:00:53.889 ********
2026-04-11 03:55:46.340259 | orchestrator | ok: [testbed-manager]
2026-04-11 03:55:46.340265 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:55:46.340270 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:55:46.340276 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:55:46.340282 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:55:46.340288 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:55:46.340294 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:55:46.340300 | orchestrator |
2026-04-11 03:55:46.340306 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-04-11 03:55:46.340312 | orchestrator | Saturday 11 April 2026 03:55:26 +0000 (0:00:01.414) 0:00:55.304 ********
2026-04-11 03:55:46.340319 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:55:46.340325 | orchestrator | changed: [testbed-manager]
2026-04-11 03:55:46.340331 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:55:46.340336 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:55:46.340343 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:55:46.340349 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:55:46.340355 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:55:46.340361 | orchestrator |
2026-04-11 03:55:46.340367 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-04-11 03:55:46.340374 | orchestrator | Saturday 11 April 2026 03:55:27 +0000 (0:00:01.574) 0:00:56.878 ********
2026-04-11 03:55:46.340380 | orchestrator | ok: [testbed-manager]
2026-04-11 03:55:46.340387 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:55:46.340393 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:55:46.340400 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:55:46.340406 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:55:46.340413 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:55:46.340419 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:55:46.340423 | orchestrator |
2026-04-11 03:55:46.340427 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-04-11 03:55:46.340431 | orchestrator | Saturday 11 April 2026 03:55:29 +0000 (0:00:01.340) 0:00:58.219 ********
2026-04-11 03:55:46.340436 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:55:46.340440 | orchestrator | ok: [testbed-manager]
2026-04-11 03:55:46.340444 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:55:46.340449 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:55:46.340453 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:55:46.340465 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:55:46.340469 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:55:46.340473 | orchestrator |
2026-04-11 03:55:46.340478 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-04-11 03:55:46.340482 | orchestrator | Saturday 11 April 2026 03:55:30 +0000 (0:00:01.810) 0:01:00.030 ********
2026-04-11 03:55:46.340488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-04-11 03:55:46.340497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:55:46.340506 | orchestrator |
2026-04-11 03:55:46.340516 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-04-11 03:55:46.340522 | orchestrator | Saturday 11 April 2026 03:55:32 +0000 (0:00:01.599) 0:01:01.629 ********
2026-04-11 03:55:46.340527 | orchestrator | changed: [testbed-manager]
2026-04-11 03:55:46.340534 | orchestrator |
2026-04-11 03:55:46.340540 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-04-11 03:55:46.340545 | orchestrator | Saturday 11 April 2026 03:55:34 +0000 (0:00:02.422) 0:01:04.052 ********
2026-04-11 03:55:46.340552 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:55:46.340565 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:55:46.340571 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:55:46.340578 | orchestrator | changed: [testbed-node-3]
2026-04-11 03:55:46.340585 | orchestrator | changed: [testbed-node-4]
2026-04-11 03:55:46.340592 | orchestrator | changed: [testbed-node-5]
2026-04-11 03:55:46.340598 | orchestrator | changed: [testbed-manager]
2026-04-11 03:55:46.340604 | orchestrator |
2026-04-11 03:55:46.340614 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 03:55:46.340621 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 03:55:46.340628 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 03:55:46.340634 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 03:55:46.340640 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 03:55:46.340654 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 03:55:46.838676 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 03:55:46.838753 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 03:55:46.838761 | orchestrator |
2026-04-11 03:55:46.838768 | orchestrator |
2026-04-11 03:55:46.838775 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 03:55:46.838783 | orchestrator | Saturday 11 April 2026 03:55:46 +0000 (0:00:11.376) 0:01:15.428 ********
2026-04-11 03:55:46.838810 | orchestrator | ===============================================================================
2026-04-11 03:55:46.838817 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 24.20s
2026-04-11 03:55:46.838823 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.19s
2026-04-11 03:55:46.838829 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.38s
2026-04-11 03:55:46.838835 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.81s
2026-04-11 03:55:46.838841 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.42s
2026-04-11 03:55:46.838847 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.32s
2026-04-11 03:55:46.838853 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.98s
2026-04-11 03:55:46.838859 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.85s
2026-04-11 03:55:46.838895 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.81s
2026-04-11 03:55:46.838901 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.71s
2026-04-11 03:55:46.838907 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.60s
2026-04-11 03:55:46.838913 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.57s
2026-04-11 03:55:46.838918 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.49s
2026-04-11 03:55:46.838924 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.41s
2026-04-11 03:55:46.838931 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.34s
2026-04-11 03:55:46.838937 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.08s
2026-04-11 03:55:51.174511 | orchestrator | 2026-04-11 03:55:51 | INFO  | Task 09cc0cb9-2fe8-484d-b96b-0c1f601ba311 (prometheus) was prepared for execution.
2026-04-11 03:55:51.174640 | orchestrator | 2026-04-11 03:55:51 | INFO  | It takes a moment until task 09cc0cb9-2fe8-484d-b96b-0c1f601ba311 (prometheus) has been started and output is visible here.
2026-04-11 03:56:01.795197 | orchestrator |
2026-04-11 03:56:01.795315 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 03:56:01.795333 | orchestrator |
2026-04-11 03:56:01.795345 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 03:56:01.795356 | orchestrator | Saturday 11 April 2026 03:55:56 +0000 (0:00:00.347) 0:00:00.347 ********
2026-04-11 03:56:01.795368 | orchestrator | ok: [testbed-manager]
2026-04-11 03:56:01.795380 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:56:01.795393 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:56:01.795404 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:56:01.795415 | orchestrator | ok: [testbed-node-3]
2026-04-11 03:56:01.795426 | orchestrator | ok: [testbed-node-4]
2026-04-11 03:56:01.795437 | orchestrator | ok: [testbed-node-5]
2026-04-11 03:56:01.795448 | orchestrator |
2026-04-11 03:56:01.795459 | orchestrator |
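When post-processing console output like the PLAY RECAP blocks in this log (for example, to fail a job on any non-zero `failed=` or `unreachable=` count), the per-host recap lines can be parsed mechanically. A minimal sketch, assuming the default Ansible recap format seen here and a line already stripped of the Zuul timestamp prefix:

```python
import re

# Matches default Ansible PLAY RECAP host lines, e.g.
# "testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+unreachable=(?P<unreachable>\d+)\s+"
    r"failed=(?P<failed>\d+)\s+skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)\s+"
    r"ignored=(?P<ignored>\d+)"
)


def parse_recap_line(line: str):
    """Parse one PLAY RECAP host line into (host, counter dict),
    or return None if the line is not a recap line."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        return None
    host = m.group("host")
    counters = {k: int(v) for k, v in m.groupdict().items() if k != "host"}
    return host, counters
```

This is illustrative tooling, not part of the job itself; field spacing in recap output can vary between Ansible versions, so the flexible `\s+` separators are deliberate.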
TASK [Group hosts based on enabled services] ***********************************
2026-04-11 03:56:01.795471 | orchestrator | Saturday 11 April 2026 03:55:57 +0000 (0:00:00.957) 0:00:01.304 ********
2026-04-11 03:56:01.795482 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-04-11 03:56:01.795494 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-11 03:56:01.795505 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-11 03:56:01.795516 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-11 03:56:01.795526 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-11 03:56:01.795537 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-11 03:56:01.795548 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-11 03:56:01.795559 | orchestrator |
2026-04-11 03:56:01.795570 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-11 03:56:01.795581 | orchestrator |
2026-04-11 03:56:01.795592 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-11 03:56:01.795603 | orchestrator | Saturday 11 April 2026 03:55:58 +0000 (0:00:01.085) 0:00:02.390 ********
2026-04-11 03:56:01.795615 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:56:01.795627 | orchestrator |
2026-04-11 03:56:01.795638 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-11 03:56:01.795649 | orchestrator | Saturday 11 April 2026 03:55:59 +0000 (0:00:01.499) 0:00:03.889 ********
2026-04-11 03:56:01.795665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name':
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 03:56:01.795681 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-11 03:56:01.795720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 03:56:01.795750 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 03:56:01.795783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 03:56:01.795799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 03:56:01.795812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-11 03:56:01.795828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 03:56:01.795841 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 03:56:01.795922 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 03:56:01.795946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 03:56:01.795972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 03:56:02.726641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 03:56:02.726777 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 
03:56:02.726805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:02.726824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:02.726847 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-11 03:56:02.726971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:02.727033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:02.727047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:02.727059 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 03:56:02.727070 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:02.727082 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 03:56:02.727101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:02.727113 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:02.727124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:02.727169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:08.513064 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 03:56:08.513162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:08.513175 | orchestrator |
2026-04-11 03:56:08.513185 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-11 03:56:08.513196 | orchestrator | Saturday 11 April 2026 03:56:02 +0000 (0:00:03.036) 0:00:06.926 ********
2026-04-11 03:56:08.513205 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 03:56:08.513215 | orchestrator |
2026-04-11 03:56:08.513223 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-04-11 03:56:08.513231 | orchestrator | Saturday 11 April 2026 03:56:04 +0000 (0:00:01.846) 0:00:08.773 ********
2026-04-11 03:56:08.513263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:08.513273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:08.513283 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-11 03:56:08.513306 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:08.513331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:08.513340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:08.513349 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:08.513363 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:08.513372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:08.513380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:08.513392 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:08.513416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:08.513446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:10.781358 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:10.781452 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:10.781491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:10.781502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:10.781512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 03:56:10.781536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:10.781546 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 03:56:10.781573 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-11 03:56:10.781592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 03:56:10.781602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:10.781611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:10.781621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:10.781635 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:10.781644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:10.781661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:11.775275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:11.775408 | orchestrator |
2026-04-11 03:56:11.775438 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-04-11 03:56:11.775460 | orchestrator | Saturday 11 April 2026 03:56:10 +0000 (0:00:06.201) 0:00:14.975 ********
2026-04-11 03:56:11.775478 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-11 03:56:11.775493 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:11.775505 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:11.775649 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-11 03:56:11.775698 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:11.775736 | orchestrator | skipping: [testbed-manager]
2026-04-11 03:56:11.775749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:11.775763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:11.775776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:11.775790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:11.775804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:11.775823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:11.775836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:11.775941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:12.578670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:12.578763 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:56:12.578775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:12.578783 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:56:12.578789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:12.578796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:12.578815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:12.578821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:12.578868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:12.578875 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:56:12.578893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 03:56:12.578899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 03:56:12.578905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 03:56:12.578910 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:56:12.578916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 03:56:12.578922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 03:56:12.578931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 03:56:12.578943 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:56:12.578949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 03:56:12.578963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 03:56:13.517793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 03:56:13.517915 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:56:13.517930 | orchestrator | 2026-04-11 03:56:13.517941 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-11 03:56:13.517951 | orchestrator | Saturday 11 April 2026 03:56:12 +0000 (0:00:01.799) 0:00:16.774 ******** 2026-04-11 03:56:13.517962 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-11 03:56:13.517970 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 03:56:13.517989 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 03:56:13.518057 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-11 03:56:13.518079 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 03:56:13.518086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 03:56:13.518099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 03:56:13.518104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 03:56:13.518116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 03:56:13.518121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 03:56:13.518136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 03:56:13.518141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 03:56:13.518151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 03:56:14.944580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 03:56:14.944671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 03:56:14.944699 | orchestrator | skipping: [testbed-manager] 2026-04-11 03:56:14.944708 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:56:14.944715 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:56:14.944723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 03:56:14.944732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-11 03:56:14.944776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 03:56:14.944785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 03:56:14.944793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 03:56:14.944800 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:56:14.944821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 03:56:14.944828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 03:56:14.944834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 03:56:14.944867 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:56:14.944874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 03:56:14.944892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 03:56:14.944899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 03:56:14.944905 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:56:14.944911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2026-04-11 03:56:14.944924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 03:56:19.241762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 03:56:19.241905 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:56:19.241919 | orchestrator | 2026-04-11 03:56:19.241926 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-11 03:56:19.241933 | orchestrator | Saturday 11 April 2026 03:56:14 +0000 (0:00:02.363) 0:00:19.138 ******** 2026-04-11 03:56:19.241940 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-11 03:56:19.241965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 03:56:19.241982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 03:56:19.241987 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:19.241992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:19.242009 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:19.242054 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:19.242059 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:56:19.242070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:19.242075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:19.242084 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:19.242090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:19.242095 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:19.242105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:22.029096 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:22.029202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:22.029213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:22.029222 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 03:56:22.029241 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 03:56:22.029249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:22.029256 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 03:56:22.029277 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-11 03:56:22.029293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:22.029300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:22.029311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:56:22.029318 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:22.029325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:22.029332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:22.029345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:56:26.819755 | orchestrator |
2026-04-11 03:56:26.819934 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-04-11 03:56:26.819947 | orchestrator | Saturday 11 April 2026 03:56:22 +0000 (0:00:07.086) 0:00:26.224 ********
2026-04-11 03:56:26.819954 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 03:56:26.819961 | orchestrator |
2026-04-11 03:56:26.819968 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-04-11 03:56:26.819975 | orchestrator | Saturday 11 April 2026 03:56:22 +0000 (0:00:00.987) 0:00:27.212 ********
2026-04-11 03:56:26.819986 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103179, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0561914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:26.819997 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103179, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0561914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:26.820019 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103179, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0561914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:26.820028 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103206, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0641916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:26.820035 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103179, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0561914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:26.820042 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103179, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0561914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:26.820084 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103179, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0561914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:26.820092 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103179, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0561914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:26.820099 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103206, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0641916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:26.820110 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103206, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0641916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:26.820117 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103206, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0641916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:26.820123 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103206, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0641916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:26.820129 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103171, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0541914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:26.820146 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103206, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0641916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705492 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103171, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0541914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705576 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103206, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0641916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705600 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103194, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0611916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705607 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103171, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0541914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705616 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103171, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0541914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705650 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103171, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0541914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705661 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103194, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0611916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705687 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103194, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0611916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705697 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103171, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0541914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705711 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103194, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0611916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705721 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103166, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0529766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705730 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103194, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0611916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705746 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103166, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0529766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705755 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103194, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0611916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:28.705770 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103166, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0529766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:30.991638 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103166, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0529766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:30.991756 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103166, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0529766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:30.991775 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103184, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.057622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:30.991785 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103184, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.057622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:30.991820 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103166, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0529766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:30.991890 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103171, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0541914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:30.991902 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103184, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.057622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:30.991932 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103184, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.057622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:30.991949 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103184, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.057622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:30.991959 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103192, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0601914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:30.991968 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103192, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0601914, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:30.991984 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103184, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.057622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:30.991994 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103192, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0601914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:30.992004 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103192, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0601914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-04-11 03:56:30.992021 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103192, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0601914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:33.101472 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103186, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0579321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:33.101619 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103192, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0601914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:33.101660 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103186, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0579321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:33.101669 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103186, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0579321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:33.101678 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103177, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0557575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:33.101686 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 
'inode': 1103186, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0579321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:33.101693 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103186, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0579321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:33.101719 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103194, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0611916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 03:56:33.101733 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103177, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0557575, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:33.101748 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103186, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0579321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:33.101756 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103177, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0557575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:33.101764 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103177, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0557575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-04-11 03:56:33.101772 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103203, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0631917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:33.101779 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103177, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0557575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:33.101791 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103177, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0557575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.204068 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103203, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0631917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.247617 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103203, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0631917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.247674 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103203, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0631917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.247679 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103203, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0631917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.247684 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103162, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0520263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.247688 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103166, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0529766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 03:56:35.247693 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103203, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1775872455.0631917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.247732 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103162, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0520263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.247738 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103162, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0520263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.247742 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103224, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0681915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.247746 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103162, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0520263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.247750 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103162, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0520263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.247754 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103224, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0681915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.247758 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103224, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0681915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:35.247772 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103200, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0621915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:37.082443 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103184, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.057622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-11 03:56:37.082510 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103224, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0681915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:37.082521 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103224, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0681915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:37.082529 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103162, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0520263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:37.082538 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103200, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1775872455.0621915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:37.082547 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103169, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0536494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:37.082581 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103200, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0621915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:37.082601 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103200, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0621915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:37.082610 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103224, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0681915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:37.082619 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103200, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0621915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:37.082627 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103169, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0536494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-11 03:56:37.082635 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103169, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0536494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:37.082642 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103200, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0621915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:37.082665 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103164, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0521913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:37.082678 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103169, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0536494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888621 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103169, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0536494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888701 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103164, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0521913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888711 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103192, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0601914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888718 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103164, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0521913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888739 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103169, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0536494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888755 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103164, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0521913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888762 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103191, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888780 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103191, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888787 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103164, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0521913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888793 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103188, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888800 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103191, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888811 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103218, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0680008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888818 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:56:38.888870 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103164, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0521913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888878 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103191, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:38.888889 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103188, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194273 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103191, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194391 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103188, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194410 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103188, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194450 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103191, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194477 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103188, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194489 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103218, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0680008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194501 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:56:45.194514 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103218, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0680008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194543 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:56:45.194555 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103218, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0680008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194566 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:56:45.194578 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103186, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0579321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194601 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103218, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0680008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194613 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:56:45.194626 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103188, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194646 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103218, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0680008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194662 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:56:45.194675 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103177, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0557575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:56:45.194696 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103203, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0631917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:57:14.019422 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103162, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0520263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:57:14.019562 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103224, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0681915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:57:14.019576 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103200, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0621915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:57:14.019587 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103169, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0536494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:57:14.019611 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103164, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0521913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:57:14.019621 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103191, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:57:14.019631 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103188, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0591915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:57:14.019657 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103218, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0680008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-11 03:57:14.019676 | orchestrator |
2026-04-11 03:57:14.019686 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-04-11 03:57:14.019698 | orchestrator | Saturday 11 April 2026 03:56:51 +0000 (0:00:28.995) 0:00:56.207 ********
2026-04-11 03:57:14.019707 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 03:57:14.019717 | orchestrator |
2026-04-11 03:57:14.019725 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-04-11 03:57:14.019733 | orchestrator | Saturday 11 April 2026 03:56:52 +0000 (0:00:00.970) 0:00:57.178 ********
2026-04-11 03:57:14.019742 | orchestrator | [WARNING]: Skipped
2026-04-11 03:57:14.019753 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.019763 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-04-11 03:57:14.019771 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.019780 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-04-11 03:57:14.019788 | orchestrator | [WARNING]: Skipped
2026-04-11 03:57:14.019796 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.019932 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-04-11 03:57:14.019943 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.019951 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-04-11 03:57:14.019961 | orchestrator | [WARNING]: Skipped
2026-04-11 03:57:14.019969 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.019978 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-04-11 03:57:14.019987 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.019995 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-04-11 03:57:14.020006 | orchestrator | [WARNING]: Skipped
2026-04-11 03:57:14.020016 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.020026 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-04-11 03:57:14.020036 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.020045 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-04-11 03:57:14.020055 | orchestrator | [WARNING]: Skipped
2026-04-11 03:57:14.020065 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.020083 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-04-11 03:57:14.020093 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.020101 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-04-11 03:57:14.020110 | orchestrator | [WARNING]: Skipped
2026-04-11 03:57:14.020118 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.020127 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-04-11 03:57:14.020136 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.020145 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-04-11 03:57:14.020154 | orchestrator | [WARNING]: Skipped
2026-04-11 03:57:14.020165 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.020174 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-04-11 03:57:14.020183 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 03:57:14.020193 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-04-11 03:57:14.020202 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 03:57:14.020221 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 03:57:14.020230 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-11 03:57:14.020239 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-11 03:57:14.020248 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-11 03:57:14.020256 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-11 03:57:14.020264 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-11 03:57:14.020273 | orchestrator |
2026-04-11 03:57:14.020282 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-04-11 03:57:14.020293 | orchestrator | Saturday 11 April 2026 03:56:54 +0000 (0:00:02.031) 0:00:59.210 ********
2026-04-11 03:57:14.020302 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 03:57:14.020312 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:57:14.020321 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 03:57:14.020330 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:57:14.020339 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 03:57:14.020348 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:57:14.020370 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 03:57:32.975003 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:57:32.975083 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 03:57:32.975090 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:57:32.975095 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 03:57:32.975099 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:57:32.975103 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 03:57:32.975108 | orchestrator |
2026-04-11 03:57:32.975112 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-04-11 03:57:32.975117 | orchestrator | Saturday 11 April 2026 03:57:13 +0000 (0:00:19.003) 0:01:18.213 ********
2026-04-11 03:57:32.975121 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 03:57:32.975125 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:57:32.975129 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 03:57:32.975132 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:57:32.975136 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 03:57:32.975140 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:57:32.975144 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 03:57:32.975148 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:57:32.975151 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 03:57:32.975156 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 03:57:32.975159 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:57:32.975163 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:57:32.975167 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 03:57:32.975171 | orchestrator |
2026-04-11 03:57:32.975175 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-04-11 03:57:32.975179 | orchestrator | Saturday 11 April 2026 03:57:17 +0000 (0:00:03.091) 0:01:21.304 ********
2026-04-11 03:57:32.975183 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 03:57:32.975188 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:57:32.975208 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 03:57:32.975212 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:57:32.975216 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 03:57:32.975219 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:57:32.975234 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 03:57:32.975238 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:57:32.975241 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 03:57:32.975246 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 03:57:32.975249 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:57:32.975253 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 03:57:32.975257 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:57:32.975261 | orchestrator |
2026-04-11 03:57:32.975265 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-04-11 03:57:32.975269 | orchestrator | Saturday 11 April 2026 03:57:19 +0000 (0:00:02.261) 0:01:23.566 ********
2026-04-11 03:57:32.975273 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 03:57:32.975276 | orchestrator |
2026-04-11 03:57:32.975280 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-04-11 03:57:32.975285 | orchestrator | Saturday 11 April 2026 03:57:20 +0000 (0:00:00.867) 0:01:24.433 ********
2026-04-11 03:57:32.975289 | orchestrator | skipping: [testbed-manager]
2026-04-11 03:57:32.975292 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:57:32.975296 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:57:32.975300 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:57:32.975304 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:57:32.975307 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:57:32.975311 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:57:32.975315 | orchestrator |
2026-04-11 03:57:32.975319 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-04-11 03:57:32.975323 | orchestrator | Saturday 11 April 2026 03:57:21 +0000 (0:00:00.836) 0:01:25.270 ********
2026-04-11 03:57:32.975326 | orchestrator | skipping: [testbed-manager]
2026-04-11 03:57:32.975330 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:57:32.975334 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:57:32.975338 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:57:32.975342 | orchestrator | changed: [testbed-node-0]
2026-04-11 03:57:32.975345 | orchestrator | changed: [testbed-node-1]
2026-04-11 03:57:32.975349 | orchestrator | changed: [testbed-node-2]
2026-04-11 03:57:32.975353 | orchestrator |
2026-04-11 03:57:32.975357 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-04-11 03:57:32.975370 | orchestrator | Saturday 11 April 2026 03:57:23 +0000 (0:00:02.383) 0:01:27.654 ********
2026-04-11 03:57:32.975374 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 03:57:32.975378 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 03:57:32.975382 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:57:32.975386 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 03:57:32.975390 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 03:57:32.975393 | orchestrator | skipping: [testbed-manager]
2026-04-11 03:57:32.975397 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:57:32.975405 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:57:32.975409 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 03:57:32.975412 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:57:32.975416 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 03:57:32.975420 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:57:32.975424 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 03:57:32.975428 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:57:32.975432 | orchestrator |
2026-04-11 03:57:32.975435 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-04-11 03:57:32.975439 | orchestrator | Saturday 11 April 2026 03:57:25 +0000 (0:00:01.634) 0:01:29.288 ********
2026-04-11 03:57:32.975443 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-11 03:57:32.975447 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:57:32.975451 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-11 03:57:32.975454 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:57:32.975458
| orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-11 03:57:32.975462 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-11 03:57:32.975466 | orchestrator | skipping: [testbed-node-3] 2026-04-11 03:57:32.975470 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:57:32.975473 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-11 03:57:32.975477 | orchestrator | skipping: [testbed-node-4] 2026-04-11 03:57:32.975481 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-11 03:57:32.975485 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-11 03:57:32.975488 | orchestrator | skipping: [testbed-node-5] 2026-04-11 03:57:32.975492 | orchestrator | 2026-04-11 03:57:32.975499 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-11 03:57:32.975503 | orchestrator | Saturday 11 April 2026 03:57:26 +0000 (0:00:01.882) 0:01:31.171 ******** 2026-04-11 03:57:32.975507 | orchestrator | [WARNING]: Skipped 2026-04-11 03:57:32.975512 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-11 03:57:32.975516 | orchestrator | due to this access issue: 2026-04-11 03:57:32.975520 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-11 03:57:32.975524 | orchestrator | not a directory 2026-04-11 03:57:32.975527 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 03:57:32.975531 | orchestrator | 2026-04-11 03:57:32.975535 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 
2026-04-11 03:57:32.975539 | orchestrator | Saturday 11 April 2026 03:57:28 +0000 (0:00:01.298) 0:01:32.470 ********
2026-04-11 03:57:32.975543 | orchestrator | skipping: [testbed-manager]
2026-04-11 03:57:32.975546 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:57:32.975550 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:57:32.975554 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:57:32.975558 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:57:32.975561 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:57:32.975565 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:57:32.975569 | orchestrator |
2026-04-11 03:57:32.975573 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-04-11 03:57:32.975576 | orchestrator | Saturday 11 April 2026 03:57:29 +0000 (0:00:01.065) 0:01:33.536 ********
2026-04-11 03:57:32.975580 | orchestrator | skipping: [testbed-manager]
2026-04-11 03:57:32.975589 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:57:32.975593 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:57:32.975597 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:57:32.975600 | orchestrator | skipping: [testbed-node-3]
2026-04-11 03:57:32.975604 | orchestrator | skipping: [testbed-node-4]
2026-04-11 03:57:32.975608 | orchestrator | skipping: [testbed-node-5]
2026-04-11 03:57:32.975611 | orchestrator |
2026-04-11 03:57:32.975615 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-04-11 03:57:32.975619 | orchestrator | Saturday 11 April 2026 03:57:30 +0000 (0:00:01.089) 0:01:34.625 ********
2026-04-11 03:57:32.975629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:57:34.499247 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-11 03:57:34.500142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:57:34.500177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:57:34.500200 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:57:34.500209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:57:34.500237 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:57:34.500245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 03:57:34.500271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:57:34.500279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:57:34.500285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:57:34.500293 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:57:34.500305 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:57:34.500312 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:57:34.500324 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:57:34.500337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:57:36.745417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:57:36.745502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:57:36.745510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 03:57:36.745528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 03:57:36.745533 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 03:57:36.745554 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-11 03:57:36.745570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:57:36.745576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:57:36.745581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 03:57:36.745585 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:57:36.745593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:57:36.745603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:57:36.745608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 03:57:36.745613 | orchestrator |
2026-04-11 03:57:36.745618 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-04-11 03:57:36.745624 | orchestrator | Saturday 11 April 2026 03:57:34 +0000 (0:00:04.079) 0:01:38.704 ********
2026-04-11 03:57:36.745629 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-11 03:57:36.745634 | orchestrator | skipping: [testbed-manager]
2026-04-11 03:57:36.745639 | orchestrator |
2026-04-11 03:57:36.745643 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 03:57:36.745648 | orchestrator | Saturday 11 April 2026 03:57:35 +0000 (0:00:01.425) 0:01:40.130 ********
2026-04-11 03:57:36.745652 | orchestrator |
2026-04-11 03:57:36.745657 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 03:57:36.745661 | orchestrator | Saturday 11 April 2026 03:57:36 +0000 (0:00:00.283) 0:01:40.413 ********
2026-04-11 03:57:36.745665 | orchestrator |
2026-04-11 03:57:36.745670 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-11 03:57:36.745674 | orchestrator | Saturday 11 April 2026 03:57:36 +0000 (0:00:00.089) 0:01:40.503 ******** 2026-04-11 03:57:36.745678 | orchestrator | 2026-04-11 03:57:36.745683 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-11 03:57:36.745690 | orchestrator | Saturday 11 April 2026 03:57:36 +0000 (0:00:00.075) 0:01:40.578 ******** 2026-04-11 03:59:20.817385 | orchestrator | 2026-04-11 03:59:20.817518 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-11 03:59:20.817574 | orchestrator | Saturday 11 April 2026 03:57:36 +0000 (0:00:00.069) 0:01:40.647 ******** 2026-04-11 03:59:20.817589 | orchestrator | 2026-04-11 03:59:20.817604 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-11 03:59:20.817619 | orchestrator | Saturday 11 April 2026 03:57:36 +0000 (0:00:00.097) 0:01:40.745 ******** 2026-04-11 03:59:20.817634 | orchestrator | 2026-04-11 03:59:20.817647 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-11 03:59:20.817661 | orchestrator | Saturday 11 April 2026 03:57:36 +0000 (0:00:00.074) 0:01:40.819 ******** 2026-04-11 03:59:20.817675 | orchestrator | 2026-04-11 03:59:20.817688 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-11 03:59:20.817703 | orchestrator | Saturday 11 April 2026 03:57:36 +0000 (0:00:00.114) 0:01:40.933 ******** 2026-04-11 03:59:20.817717 | orchestrator | changed: [testbed-manager] 2026-04-11 03:59:20.817791 | orchestrator | 2026-04-11 03:59:20.817807 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-11 03:59:20.817820 | orchestrator | Saturday 11 April 2026 03:58:03 +0000 
(0:00:26.543) 0:02:07.477 ******** 2026-04-11 03:59:20.817833 | orchestrator | changed: [testbed-manager] 2026-04-11 03:59:20.817847 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:59:20.817894 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:59:20.817910 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:59:20.817923 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:59:20.817935 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:59:20.817947 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:59:20.817960 | orchestrator | 2026-04-11 03:59:20.817974 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-04-11 03:59:20.817988 | orchestrator | Saturday 11 April 2026 03:58:12 +0000 (0:00:08.919) 0:02:16.396 ******** 2026-04-11 03:59:20.818001 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:59:20.818082 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:59:20.818100 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:59:20.818114 | orchestrator | 2026-04-11 03:59:20.818129 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-04-11 03:59:20.818144 | orchestrator | Saturday 11 April 2026 03:58:23 +0000 (0:00:10.892) 0:02:27.289 ******** 2026-04-11 03:59:20.818159 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:59:20.818173 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:59:20.818187 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:59:20.818202 | orchestrator | 2026-04-11 03:59:20.818216 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-04-11 03:59:20.818229 | orchestrator | Saturday 11 April 2026 03:58:29 +0000 (0:00:06.147) 0:02:33.436 ******** 2026-04-11 03:59:20.818244 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:59:20.818259 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:59:20.818292 | orchestrator | changed: 
[testbed-node-5] 2026-04-11 03:59:20.818307 | orchestrator | changed: [testbed-manager] 2026-04-11 03:59:20.818322 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:59:20.818338 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:59:20.818353 | orchestrator | changed: [testbed-node-3] 2026-04-11 03:59:20.818368 | orchestrator | 2026-04-11 03:59:20.818383 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-11 03:59:20.818398 | orchestrator | Saturday 11 April 2026 03:58:43 +0000 (0:00:14.750) 0:02:48.187 ******** 2026-04-11 03:59:20.818413 | orchestrator | changed: [testbed-manager] 2026-04-11 03:59:20.818429 | orchestrator | 2026-04-11 03:59:20.818444 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-11 03:59:20.818461 | orchestrator | Saturday 11 April 2026 03:58:52 +0000 (0:00:08.919) 0:02:57.106 ******** 2026-04-11 03:59:20.818477 | orchestrator | changed: [testbed-node-0] 2026-04-11 03:59:20.818492 | orchestrator | changed: [testbed-node-2] 2026-04-11 03:59:20.818509 | orchestrator | changed: [testbed-node-1] 2026-04-11 03:59:20.818523 | orchestrator | 2026-04-11 03:59:20.818537 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-11 03:59:20.818553 | orchestrator | Saturday 11 April 2026 03:59:03 +0000 (0:00:10.713) 0:03:07.819 ******** 2026-04-11 03:59:20.818569 | orchestrator | changed: [testbed-manager] 2026-04-11 03:59:20.818584 | orchestrator | 2026-04-11 03:59:20.818598 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-11 03:59:20.818612 | orchestrator | Saturday 11 April 2026 03:59:09 +0000 (0:00:06.068) 0:03:13.888 ******** 2026-04-11 03:59:20.818625 | orchestrator | changed: [testbed-node-4] 2026-04-11 03:59:20.818640 | orchestrator | changed: [testbed-node-5] 2026-04-11 03:59:20.818656 | orchestrator | changed: 
[testbed-node-3]
2026-04-11 03:59:20.818670 | orchestrator |
2026-04-11 03:59:20.818684 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 03:59:20.818701 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-11 03:59:20.818719 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-11 03:59:20.818762 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-11 03:59:20.818800 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-11 03:59:20.818817 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-11 03:59:20.818867 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-11 03:59:20.818891 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-11 03:59:20.818906 | orchestrator |
2026-04-11 03:59:20.818920 | orchestrator |
2026-04-11 03:59:20.818934 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 03:59:20.818946 | orchestrator | Saturday 11 April 2026 03:59:20 +0000 (0:00:10.477) 0:03:24.366 ********
2026-04-11 03:59:20.818959 | orchestrator | ===============================================================================
2026-04-11 03:59:20.818974 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 29.00s
2026-04-11 03:59:20.818988 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 26.54s
2026-04-11 03:59:20.819004 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 19.00s
2026-04-11 03:59:20.819019 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.75s
2026-04-11 03:59:20.819034 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.89s
2026-04-11 03:59:20.819049 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.71s
2026-04-11 03:59:20.819064 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.48s
2026-04-11 03:59:20.819078 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 8.92s
2026-04-11 03:59:20.819093 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.92s
2026-04-11 03:59:20.819107 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.09s
2026-04-11 03:59:20.819122 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.20s
2026-04-11 03:59:20.819136 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.15s
2026-04-11 03:59:20.819149 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.07s
2026-04-11 03:59:20.819162 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.08s
2026-04-11 03:59:20.819176 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.09s
2026-04-11 03:59:20.819191 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.04s
2026-04-11 03:59:20.819205 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.38s
2026-04-11 03:59:20.819220 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.36s
2026-04-11 03:59:20.819262 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.26s
2026-04-11 03:59:20.819278 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.03s
2026-04-11 03:59:25.569544 | orchestrator | 2026-04-11 03:59:25 | INFO  | Task c8022750-19f0-42a6-88f6-debe1eda24a5 (grafana) was prepared for execution.
2026-04-11 03:59:25.569656 | orchestrator | 2026-04-11 03:59:25 | INFO  | It takes a moment until task c8022750-19f0-42a6-88f6-debe1eda24a5 (grafana) has been started and output is visible here.
2026-04-11 03:59:36.392390 | orchestrator |
2026-04-11 03:59:36.392518 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 03:59:36.392534 | orchestrator |
2026-04-11 03:59:36.392544 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 03:59:36.392578 | orchestrator | Saturday 11 April 2026 03:59:30 +0000 (0:00:00.334) 0:00:00.334 ********
2026-04-11 03:59:36.392587 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:59:36.392596 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:59:36.392609 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:59:36.392619 | orchestrator |
2026-04-11 03:59:36.392626 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 03:59:36.392635 | orchestrator | Saturday 11 April 2026 03:59:30 +0000 (0:00:00.358) 0:00:00.693 ********
2026-04-11 03:59:36.392643 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-04-11 03:59:36.392652 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-04-11 03:59:36.392661 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-04-11 03:59:36.392668 | orchestrator |
2026-04-11 03:59:36.392675 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-04-11 03:59:36.392683 | orchestrator |
2026-04-11 03:59:36.392692 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-11
03:59:36.392699 | orchestrator | Saturday 11 April 2026 03:59:31 +0000 (0:00:00.523) 0:00:01.216 ********
2026-04-11 03:59:36.392708 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:59:36.392717 | orchestrator |
2026-04-11 03:59:36.392794 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-04-11 03:59:36.392806 | orchestrator | Saturday 11 April 2026 03:59:31 +0000 (0:00:00.602) 0:00:01.818 ********
2026-04-11 03:59:36.392819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-11 03:59:36.392831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-11 03:59:36.392840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-11 03:59:36.392848 | orchestrator |
2026-04-11 03:59:36.392857 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-04-11 03:59:36.392865 | orchestrator | Saturday 11 April 2026 03:59:32 +0000 (0:00:00.903) 0:00:02.721 ********
2026-04-11 03:59:36.392874 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-04-11 03:59:36.392894 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-04-11 03:59:36.392904 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 03:59:36.392914 | orchestrator |
2026-04-11 03:59:36.392938 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-11 03:59:36.392948 | orchestrator | Saturday 11 April 2026 03:59:33 +0000 (0:00:00.936) 0:00:03.658 ********
2026-04-11 03:59:36.392959 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 03:59:36.392968 | orchestrator |
2026-04-11 03:59:36.392978 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-04-11 03:59:36.392988 | orchestrator | Saturday 11 April 2026 03:59:34 +0000 (0:00:00.696) 0:00:04.355 ********
2026-04-11 03:59:36.393016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-11 03:59:36.393027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-11 03:59:36.393038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-11 03:59:36.393047 | orchestrator | 2026-04-11 03:59:36.393057 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-11 03:59:36.393066 | orchestrator | Saturday 11 April 2026 03:59:35 +0000 (0:00:01.314) 0:00:05.670 ******** 2026-04-11 03:59:36.393075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-11 03:59:36.393084 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:59:36.393094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-11 03:59:36.393108 | 
orchestrator | skipping: [testbed-node-1] 2026-04-11 03:59:36.393129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-11 03:59:43.620298 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:59:43.620402 | orchestrator | 2026-04-11 03:59:43.620416 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-11 03:59:43.620427 | orchestrator | Saturday 11 April 2026 03:59:36 +0000 (0:00:00.699) 0:00:06.369 ******** 2026-04-11 03:59:43.620437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-11 03:59:43.620449 | orchestrator | skipping: [testbed-node-0] 2026-04-11 03:59:43.620458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-11 03:59:43.620467 | orchestrator | skipping: [testbed-node-1] 2026-04-11 03:59:43.620475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-11 03:59:43.620499 | orchestrator | skipping: [testbed-node-2] 2026-04-11 03:59:43.620556 | orchestrator | 2026-04-11 03:59:43.620574 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-11 03:59:43.620588 | orchestrator | Saturday 11 April 2026 03:59:37 +0000 (0:00:00.680) 0:00:07.050 ******** 2026-04-11 03:59:43.620603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-11 03:59:43.620634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-11 03:59:43.620670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-11 03:59:43.620686 | 
orchestrator |
2026-04-11 03:59:43.620701 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-04-11 03:59:43.620715 | orchestrator | Saturday 11 April 2026 03:59:38 +0000 (0:00:01.307) 0:00:08.357 ********
2026-04-11 03:59:43.620772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-11 03:59:43.620782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-11 03:59:43.620791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-11 03:59:43.620808 | orchestrator |
2026-04-11 03:59:43.620816 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-11 03:59:43.620824 | orchestrator | Saturday 11 April 2026 03:59:40 +0000 (0:00:01.709) 0:00:10.066 ********
2026-04-11 03:59:43.620832 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:59:43.620840 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:59:43.620849 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:59:43.620859 | orchestrator |
2026-04-11 03:59:43.620868 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-11 03:59:43.620877 | orchestrator | Saturday 11 April 2026 03:59:40 +0000 (0:00:00.377) 0:00:10.444 ********
2026-04-11 03:59:43.620887 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-11 03:59:43.620897 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-11 03:59:43.620907 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-11 03:59:43.620920 | orchestrator |
2026-04-11 03:59:43.620929 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-11 03:59:43.620937 | orchestrator | Saturday 11 April 2026 03:59:41 +0000 (0:00:01.301) 0:00:11.746 ********
2026-04-11 03:59:43.620945 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-11 03:59:43.620954 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-11 03:59:43.620962 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-11 03:59:43.620970 | orchestrator |
2026-04-11 03:59:43.620978 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-04-11 03:59:43.620992 | orchestrator | Saturday 11 April 2026 03:59:43 +0000 (0:00:01.848) 0:00:13.595 ********
2026-04-11 03:59:50.354792 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 03:59:50.354863 | orchestrator |
2026-04-11 03:59:50.354870 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-04-11 03:59:50.354876 | orchestrator | Saturday 11 April 2026 03:59:44 +0000 (0:00:00.868) 0:00:14.464 ********
2026-04-11 03:59:50.354881 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-04-11 03:59:50.354886 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-04-11 03:59:50.354891 | orchestrator | ok: [testbed-node-0]
2026-04-11 03:59:50.354896 | orchestrator | ok: [testbed-node-1]
2026-04-11 03:59:50.354900 | orchestrator | ok: [testbed-node-2]
2026-04-11 03:59:50.354904 | orchestrator |
2026-04-11 03:59:50.354909 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-04-11 03:59:50.354913 | orchestrator | Saturday 11 April 2026 03:59:45 +0000 (0:00:00.771) 0:00:15.236 ********
2026-04-11 03:59:50.354917 | orchestrator | skipping: [testbed-node-0]
2026-04-11 03:59:50.354922 | orchestrator | skipping: [testbed-node-1]
2026-04-11 03:59:50.354926 | orchestrator | skipping: [testbed-node-2]
2026-04-11 03:59:50.354930
| orchestrator | 2026-04-11 03:59:50.354934 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-11 03:59:50.354938 | orchestrator | Saturday 11 April 2026 03:59:45 +0000 (0:00:00.397) 0:00:15.633 ******** 2026-04-11 03:59:50.354962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1102993, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872454.9981904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 03:59:50.354971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1102993, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872454.9981904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 03:59:50.354976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 117836, 'inode': 1102993, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872454.9981904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 03:59:50.354990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1103047, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0146215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 03:59:50.355005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1103047, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0146215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 03:59:50.355011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1103047, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0146215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 03:59:50.355020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1103004, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0021906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 03:59:50.355024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1103004, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0021906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 03:59:50.355029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1103004, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0021906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 03:59:50.355033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1103049, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.018138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 03:59:50.355040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1103049, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.018138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 03:59:50.355048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1103049, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.018138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:54.111129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1103018, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0072777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:54.111229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1103018, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0072777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:54.111241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1103018, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0072777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:54.111249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1103034, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0121706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:54.111277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1103034, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0121706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:54.111301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1103034, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0121706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:54.111323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1102990, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872454.9951904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:54.111338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1102990, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872454.9951904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:54.111346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1102990, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872454.9951904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:54.111353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1102998, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872454.9991903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:54.111360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1102998, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872454.9991903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:54.111372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1102998, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872454.9991903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:54.111386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1103006, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0021906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:57.974767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1103006, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0021906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:57.974852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1103006, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0021906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:57.974862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1103024, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.009203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:57.974869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1103024, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.009203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:57.974887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1103024, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.009203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:57.974894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1103040, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0131907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:57.974933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1103040, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0131907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:57.974941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1103040, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0131907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:57.974947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1103001, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0011904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:57.974954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1103001, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0011904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:57.974964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1103001, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0011904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:57.974970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1103032, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0111907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 03:59:57.974987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1103032, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0111907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:02.175875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1103032, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0111907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:02.175950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1103021, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0085716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:02.175959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1103021, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0085716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:02.175965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1103021, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0085716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:02.175983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1103015, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0061905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:02.176010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1103015, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0061905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:02.176028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1103015, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0061905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:02.176033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1103011, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0047126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:02.176038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1103011, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0047126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:02.176043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1103011, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0047126, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:02.176051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1103028, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0101907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:02.176061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1103028, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0101907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:02.176070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1103028, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0101907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:06.070504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1103008, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0031905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:06.070610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1103008, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0031905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:06.070618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1103008, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0031905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:06.070634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1103038, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0121906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:06.070656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1103038, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0121906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:06.070661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1103038, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0121906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:06.070677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1103153, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0501914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:06.070682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1103153, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0501914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:06.070687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1103153, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0501914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:06.070691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1103085, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0281909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:06.070736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1103085, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0281909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:06.070742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1103085, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0281909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:06.070751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1103069, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0201907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:09.911935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1103069, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0201907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:09.912079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1103069, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0201907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:09.912106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1103106, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0311909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:09.912190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1103106, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0311909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:09.912208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1103106, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0311909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:09.912219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1103062, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0186207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:09.912251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1103062, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0186207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:09.912270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1103062, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0186207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-11 04:00:09.912287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1103128, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.041191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False,
'isgid': False}}) 2026-04-11 04:00:09.912320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1103128, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.041191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:09.912354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1103128, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.041191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:09.912372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1103108, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0371912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:09.912397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1103108, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0371912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:14.352068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1103108, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0371912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:14.352166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1103132, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1775872455.0421913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:14.352212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1103132, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0421913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:14.352220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1103132, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0421913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:14.352226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 38087, 'inode': 1103149, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0489213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:14.352234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1103149, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0489213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:14.352255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1103149, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0489213, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:14.352262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1103124, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0401912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:14.352278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1103124, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0401912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:14.352285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1103124, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0401912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:14.352291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1103099, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.029191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:14.352298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1103099, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.029191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:14.352310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1103099, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.029191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:18.170532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1103080, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.024191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:18.170650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1103080, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.024191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:18.170684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1103080, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.024191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:18.170755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1103096, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.029191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:18.170773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1103096, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.029191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:18.170787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1103096, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.029191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:18.170823 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1103072, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0227807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:18.170851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1103072, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0227807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:18.170874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1103072, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0227807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-04-11 04:00:18.170900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1103101, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0305977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:18.170917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1103101, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0305977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:18.170932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1103101, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0305977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:18.170952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1103143, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0481913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:22.069633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1103143, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0481913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:22.069759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1103143, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0481913, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:22.069769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1103136, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0441911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:22.069775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1103136, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0441911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:22.069780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 115472, 'inode': 1103136, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0441911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:22.069785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1103063, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0189257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:22.069815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1103063, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0189257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:22.069824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1103063, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0189257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:22.069828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1103066, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0199919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:22.069832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1103066, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0199919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:22.069836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1103066, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0199919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:22.069840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1103118, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.039049, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:00:22.069853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1103118, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.039049, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:02:05.098155 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1103118, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.039049, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:02:05.098237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1103134, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0427759, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:02:05.098245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1103134, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0427759, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-04-11 04:02:05.098249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1103134, 'dev': 118, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775872455.0427759, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-11 04:02:05.098253 | orchestrator | 2026-04-11 04:02:05.098259 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-04-11 04:02:05.098264 | orchestrator | Saturday 11 April 2026 04:00:23 +0000 (0:00:37.651) 0:00:53.284 ******** 2026-04-11 04:02:05.098284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-11 04:02:05.098299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-11 04:02:05.098308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-11 04:02:05.098312 | orchestrator | 2026-04-11 04:02:05.098316 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-11 04:02:05.098320 | orchestrator | Saturday 11 April 2026 04:00:24 +0000 (0:00:01.019) 0:00:54.303 ******** 2026-04-11 04:02:05.098324 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:02:05.098328 | orchestrator | 2026-04-11 04:02:05.098332 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-11 04:02:05.098336 | orchestrator | Saturday 11 April 2026 04:00:26 +0000 (0:00:02.429) 0:00:56.733 ******** 2026-04-11 04:02:05.098340 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:02:05.098344 | orchestrator | 2026-04-11 04:02:05.098348 | orchestrator | TASK [grafana : 
Flush handlers] ************************************************
2026-04-11 04:02:05.098352 | orchestrator | Saturday 11 April 2026 04:00:29 +0000 (0:00:02.338) 0:00:59.071 ********
2026-04-11 04:02:05.098355 | orchestrator |
2026-04-11 04:02:05.098359 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-11 04:02:05.098363 | orchestrator | Saturday 11 April 2026 04:00:29 +0000 (0:00:00.074) 0:00:59.146 ********
2026-04-11 04:02:05.098367 | orchestrator |
2026-04-11 04:02:05.098371 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-11 04:02:05.098375 | orchestrator | Saturday 11 April 2026 04:00:29 +0000 (0:00:00.091) 0:00:59.237 ********
2026-04-11 04:02:05.098379 | orchestrator |
2026-04-11 04:02:05.098383 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-04-11 04:02:05.098386 | orchestrator | Saturday 11 April 2026 04:00:29 +0000 (0:00:00.075) 0:00:59.313 ********
2026-04-11 04:02:05.098390 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:02:05.098394 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:02:05.098398 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:02:05.098402 | orchestrator |
2026-04-11 04:02:05.098405 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-04-11 04:02:05.098413 | orchestrator | Saturday 11 April 2026 04:00:31 +0000 (0:00:02.250) 0:01:01.563 ********
2026-04-11 04:02:05.098417 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:02:05.098421 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:02:05.098425 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-04-11 04:02:05.098430 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-04-11 04:02:05.098435 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-04-11 04:02:05.098441 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-04-11 04:02:05.098447 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:02:05.098454 | orchestrator |
2026-04-11 04:02:05.098461 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-04-11 04:02:05.098471 | orchestrator | Saturday 11 April 2026 04:01:22 +0000 (0:00:50.879) 0:01:52.443 ********
2026-04-11 04:02:05.098477 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:02:05.098483 | orchestrator | changed: [testbed-node-1]
2026-04-11 04:02:05.098488 | orchestrator | changed: [testbed-node-2]
2026-04-11 04:02:05.098495 | orchestrator |
2026-04-11 04:02:05.098501 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-04-11 04:02:05.098507 | orchestrator | Saturday 11 April 2026 04:01:59 +0000 (0:00:37.237) 0:02:29.681 ********
2026-04-11 04:02:05.098512 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:02:05.098518 | orchestrator |
2026-04-11 04:02:05.098524 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-04-11 04:02:05.098530 | orchestrator | Saturday 11 April 2026 04:02:02 +0000 (0:00:02.354) 0:02:32.035 ********
2026-04-11 04:02:05.098535 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:02:05.098540 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:02:05.098545 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:02:05.098551 | orchestrator |
2026-04-11 04:02:05.098557 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-04-11 04:02:05.098563 | orchestrator | Saturday 11 April 2026 04:02:02 +0000 (0:00:00.343) 0:02:32.379 ********
2026-04-11 04:02:05.098570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-04-11 04:02:05.098584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-04-11 04:02:05.860114 | orchestrator |
2026-04-11 04:02:05.860195 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-04-11 04:02:05.860206 | orchestrator | Saturday 11 April 2026 04:02:05 +0000 (0:00:02.694) 0:02:35.074 ********
2026-04-11 04:02:05.860213 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:02:05.860221 | orchestrator |
2026-04-11 04:02:05.860228 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 04:02:05.860235 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-11 04:02:05.860259 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-11 04:02:05.860266 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-11 04:02:05.860301 | orchestrator |
2026-04-11 04:02:05.860308 | orchestrator |
2026-04-11 04:02:05.860315 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 04:02:05.860329 | orchestrator | Saturday 11 April 2026 04:02:05 +0000 (0:00:00.341) 0:02:35.415 ********
2026-04-11 04:02:05.860335 | orchestrator | ===============================================================================
2026-04-11 04:02:05.860342 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.88s
2026-04-11 04:02:05.860348 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.65s
2026-04-11 04:02:05.860354 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 37.24s
2026-04-11 04:02:05.860360 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.70s
2026-04-11 04:02:05.860367 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.43s
2026-04-11 04:02:05.860373 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.35s
2026-04-11 04:02:05.860379 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.34s
2026-04-11 04:02:05.860386 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.25s
2026-04-11 04:02:05.860392 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.85s
2026-04-11 04:02:05.860398 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.71s
2026-04-11 04:02:05.860404 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.31s
2026-04-11 04:02:05.860410 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.31s
2026-04-11 04:02:05.860417 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.30s
2026-04-11 04:02:05.860423 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.02s
2026-04-11 04:02:05.860429 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.94s
2026-04-11 04:02:05.860435 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.90s
2026-04-11 04:02:05.860442 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.87s
2026-04-11 04:02:05.860448 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.77s
2026-04-11 04:02:05.860454 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.70s
2026-04-11 04:02:05.860461 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.70s
2026-04-11 04:02:06.281602 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh
2026-04-11 04:02:06.292249 | orchestrator | + set -e
2026-04-11 04:02:06.292912 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-11 04:02:06.292942 | orchestrator | ++ export INTERACTIVE=false
2026-04-11 04:02:06.292950 | orchestrator | ++ INTERACTIVE=false
2026-04-11 04:02:06.292957 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-11 04:02:06.292963 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-11 04:02:06.292969 | orchestrator | + source /opt/manager-vars.sh
2026-04-11 04:02:06.292976 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-11 04:02:06.292982 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-11 04:02:06.292989 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-11 04:02:06.292995 | orchestrator | ++ CEPH_VERSION=reef
2026-04-11 04:02:06.293002 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-11 04:02:06.293008 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-11 04:02:06.293015 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-11 04:02:06.293021 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-11 04:02:06.293027 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-11 04:02:06.293034 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-11 04:02:06.293040 | orchestrator | ++ export ARA=false
2026-04-11 04:02:06.293048 | orchestrator | ++ ARA=false
2026-04-11 04:02:06.293057 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-11 04:02:06.293065 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-11 04:02:06.293075 | orchestrator | ++ export TEMPEST=false
2026-04-11 04:02:06.293082 | orchestrator | ++ TEMPEST=false
2026-04-11 04:02:06.293088 | orchestrator | ++ export IS_ZUUL=true
2026-04-11 04:02:06.293094 | orchestrator | ++ IS_ZUUL=true
2026-04-11 04:02:06.293122 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 04:02:06.293128 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 04:02:06.293134 | orchestrator | ++ export EXTERNAL_API=false
2026-04-11 04:02:06.293140 | orchestrator | ++ EXTERNAL_API=false
2026-04-11 04:02:06.293146 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-11 04:02:06.293152 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-11 04:02:06.293158 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-11 04:02:06.293165 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-11 04:02:06.293170 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-11 04:02:06.293176 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-11 04:02:06.293759 | orchestrator | ++ semver 9.5.0 8.0.0
2026-04-11 04:02:06.360437 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-11 04:02:06.360544 | orchestrator | + osism apply clusterapi
2026-04-11 04:02:08.904355 | orchestrator | 2026-04-11 04:02:08 | INFO  | Task a6f2f3af-d26e-47e7-9630-66682e71fec3 (clusterapi) was prepared for execution.
2026-04-11 04:02:08.904441 | orchestrator | 2026-04-11 04:02:08 | INFO  | It takes a moment until task a6f2f3af-d26e-47e7-9630-66682e71fec3 (clusterapi) has been started and output is visible here.
2026-04-11 04:03:26.620836 | orchestrator |
2026-04-11 04:03:26.620933 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-04-11 04:03:26.620949 | orchestrator |
2026-04-11 04:03:26.620960 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-04-11 04:03:26.620972 | orchestrator | Saturday 11 April 2026 04:02:14 +0000 (0:00:00.217) 0:00:00.217 ********
2026-04-11 04:03:26.620984 | orchestrator | included: cert_manager for testbed-manager
2026-04-11 04:03:26.620992 | orchestrator |
2026-04-11 04:03:26.620998 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-04-11 04:03:26.621004 | orchestrator | Saturday 11 April 2026 04:02:14 +0000 (0:00:00.301) 0:00:00.519 ********
2026-04-11 04:03:26.621010 | orchestrator | changed: [testbed-manager]
2026-04-11 04:03:26.621017 | orchestrator |
2026-04-11 04:03:26.621023 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-04-11 04:03:26.621043 | orchestrator | Saturday 11 April 2026 04:02:19 +0000 (0:00:05.671) 0:00:06.190 ********
2026-04-11 04:03:26.621050 | orchestrator | changed: [testbed-manager]
2026-04-11 04:03:26.621055 | orchestrator |
2026-04-11 04:03:26.621061 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-04-11 04:03:26.621067 | orchestrator |
2026-04-11 04:03:26.621073 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-04-11 04:03:26.621079 | orchestrator | Saturday 11 April 2026 04:03:03 +0000 (0:00:43.306) 0:00:49.497 ********
2026-04-11 04:03:26.621085 | orchestrator | ok: [testbed-manager]
2026-04-11 04:03:26.621091 | orchestrator |
2026-04-11 04:03:26.621097 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-04-11 04:03:26.621103 | orchestrator | Saturday 11 April 2026 04:03:04 +0000 (0:00:01.220) 0:00:50.718 ********
2026-04-11 04:03:26.621109 | orchestrator | ok: [testbed-manager]
2026-04-11 04:03:26.621115 | orchestrator |
2026-04-11 04:03:26.621121 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-04-11 04:03:26.621127 | orchestrator | Saturday 11 April 2026 04:03:04 +0000 (0:00:00.167) 0:00:50.885 ********
2026-04-11 04:03:26.621133 | orchestrator | ok: [testbed-manager]
2026-04-11 04:03:26.621139 | orchestrator |
2026-04-11 04:03:26.621145 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-04-11 04:03:26.621151 | orchestrator | Saturday 11 April 2026 04:03:23 +0000 (0:00:18.852) 0:01:09.737 ********
2026-04-11 04:03:26.621157 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:03:26.621163 | orchestrator |
2026-04-11 04:03:26.621169 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-04-11 04:03:26.621175 | orchestrator | Saturday 11 April 2026 04:03:23 +0000 (0:00:00.160) 0:01:09.898 ********
2026-04-11 04:03:26.621181 | orchestrator | changed: [testbed-manager]
2026-04-11 04:03:26.621187 | orchestrator |
2026-04-11 04:03:26.621193 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 04:03:26.621218 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 04:03:26.621225 | orchestrator |
2026-04-11 04:03:26.621231 | orchestrator |
2026-04-11 04:03:26.621236 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 04:03:26.621242 | orchestrator | Saturday 11 April 2026 04:03:26 +0000 (0:00:02.501) 0:01:12.399 ********
2026-04-11 04:03:26.621248 | orchestrator | ===============================================================================
2026-04-11 04:03:26.621254 | orchestrator | cert_manager : Deploy cert-manager ------------------------------------- 43.31s
2026-04-11 04:03:26.621260 | orchestrator | Initialize the CAPI management cluster --------------------------------- 18.85s
2026-04-11 04:03:26.621265 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.67s
2026-04-11 04:03:26.621271 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.50s
2026-04-11 04:03:26.621277 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.22s
2026-04-11 04:03:26.621283 | orchestrator | Include cert_manager role ----------------------------------------------- 0.30s
2026-04-11 04:03:26.621288 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.17s
2026-04-11 04:03:26.621294 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.16s
2026-04-11 04:03:27.028053 | orchestrator | + osism apply magnum
2026-04-11 04:03:29.364606 | orchestrator | 2026-04-11 04:03:29 | INFO  | Task a9f1c556-e564-4f27-9a51-fc56a6819f5d (magnum) was prepared for execution.
2026-04-11 04:03:29.364733 | orchestrator | 2026-04-11 04:03:29 | INFO  | It takes a moment until task a9f1c556-e564-4f27-9a51-fc56a6819f5d (magnum) has been started and output is visible here.
2026-04-11 04:04:13.832566 | orchestrator |
2026-04-11 04:04:13.832708 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 04:04:13.832721 | orchestrator |
2026-04-11 04:04:13.832729 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 04:04:13.832736 | orchestrator | Saturday 11 April 2026 04:03:34 +0000 (0:00:00.311) 0:00:00.311 ********
2026-04-11 04:04:13.832743 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:04:13.832752 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:04:13.832759 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:04:13.832765 | orchestrator |
2026-04-11 04:04:13.832772 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 04:04:13.832779 | orchestrator | Saturday 11 April 2026 04:03:34 +0000 (0:00:00.384) 0:00:00.695 ********
2026-04-11 04:04:13.832786 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-11 04:04:13.832793 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-11 04:04:13.832800 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-11 04:04:13.832807 | orchestrator |
2026-04-11 04:04:13.832814 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-11 04:04:13.832820 | orchestrator |
2026-04-11 04:04:13.832827 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-11 04:04:13.832834 | orchestrator | Saturday 11 April 2026 04:03:35 +0000 (0:00:00.511) 0:00:01.207 ********
2026-04-11 04:04:13.832841 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:04:13.832849 | orchestrator |
2026-04-11 04:04:13.832855 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-04-11 04:04:13.832862 | orchestrator | Saturday 11 April 2026 04:03:35 +0000 (0:00:00.654) 0:00:01.862 ********
2026-04-11 04:04:13.832869 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-04-11 04:04:13.832876 | orchestrator |
2026-04-11 04:04:13.832883 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-04-11 04:04:13.832890 | orchestrator | Saturday 11 April 2026 04:03:39 +0000 (0:00:03.709) 0:00:05.571 ********
2026-04-11 04:04:13.832927 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-04-11 04:04:13.832935 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-04-11 04:04:13.832942 | orchestrator |
2026-04-11 04:04:13.832948 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-04-11 04:04:13.832955 | orchestrator | Saturday 11 April 2026 04:03:46 +0000 (0:00:06.872) 0:00:12.444 ********
2026-04-11 04:04:13.832962 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-11 04:04:13.832969 | orchestrator |
2026-04-11 04:04:13.832975 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-04-11 04:04:13.832982 | orchestrator | Saturday 11 April 2026 04:03:49 +0000 (0:00:03.494) 0:00:15.939 ********
2026-04-11 04:04:13.832989 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-11 04:04:13.832996 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-04-11 04:04:13.833003 | orchestrator |
2026-04-11 04:04:13.833010 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-04-11 04:04:13.833016 | orchestrator | Saturday 11 April 2026 04:03:53 +0000 (0:00:03.979) 0:00:19.919 ********
2026-04-11 04:04:13.833023 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-11 04:04:13.833030 | orchestrator |
2026-04-11 04:04:13.833036 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-04-11 04:04:13.833043 | orchestrator | Saturday 11 April 2026 04:03:57 +0000 (0:00:03.323) 0:00:23.243 ********
2026-04-11 04:04:13.833051 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-04-11 04:04:13.833063 | orchestrator |
2026-04-11 04:04:13.833079 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-04-11 04:04:13.833092 | orchestrator | Saturday 11 April 2026 04:04:01 +0000 (0:00:03.948) 0:00:27.192 ********
2026-04-11 04:04:13.833102 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:04:13.833113 | orchestrator |
2026-04-11 04:04:13.833124 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-04-11 04:04:13.833135 | orchestrator | Saturday 11 April 2026 04:04:04 +0000 (0:00:03.333) 0:00:30.526 ********
2026-04-11 04:04:13.833146 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:04:13.833159 | orchestrator |
2026-04-11 04:04:13.833171 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-04-11 04:04:13.833182 | orchestrator | Saturday 11 April 2026 04:04:08 +0000 (0:00:04.223) 0:00:34.749 ********
2026-04-11 04:04:13.833194 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:04:13.833204 | orchestrator |
2026-04-11 04:04:13.833217 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-04-11 04:04:13.833228 | orchestrator | Saturday 11 April 2026 04:04:12 +0000 (0:00:03.552) 0:00:38.301 ********
2026-04-11 04:04:13.833265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:13.833286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:13.833319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:13.833332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:13.833346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:13.833366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:21.919465 | orchestrator |
2026-04-11 04:04:21.919571 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-04-11 04:04:21.919643 | orchestrator | Saturday 11 April 2026 04:04:13 +0000 (0:00:01.674) 0:00:39.976 ********
2026-04-11 04:04:21.919655 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:04:21.919664 | orchestrator |
2026-04-11 04:04:21.919672 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-04-11 04:04:21.919681 | orchestrator | Saturday 11 April 2026 04:04:13 +0000 (0:00:00.151) 0:00:40.128 ********
2026-04-11 04:04:21.919688 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:04:21.919696 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:04:21.919704 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:04:21.919712 | orchestrator |
2026-04-11 04:04:21.919720 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-04-11 04:04:21.919728 | orchestrator | Saturday 11 April 2026 04:04:14 +0000 (0:00:00.344) 0:00:40.472 ********
2026-04-11 04:04:21.919736 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 04:04:21.919744 | orchestrator |
2026-04-11 04:04:21.919752 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-04-11 04:04:21.919760 | orchestrator | Saturday 11 April 2026 04:04:15 +0000 (0:00:01.004) 0:00:41.476 ********
2026-04-11 04:04:21.919784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:21.919798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:21.919806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:21.919832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:21.919850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:21.919862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:21.919870 | orchestrator |
2026-04-11 04:04:21.919881 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-04-11 04:04:21.919896 | orchestrator | Saturday 11 April 2026 04:04:17 +0000 (0:00:02.552) 0:00:44.029 ********
2026-04-11 04:04:21.919917 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:04:21.919934 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:04:21.919949 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:04:21.919963 | orchestrator |
2026-04-11 04:04:21.919977 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-11 04:04:21.919990 | orchestrator | Saturday 11 April 2026 04:04:18 +0000 (0:00:00.568) 0:00:44.598 ********
2026-04-11 04:04:21.920005 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:04:21.920019 | orchestrator |
2026-04-11 04:04:21.920033 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-04-11 04:04:21.920047 | orchestrator | Saturday 11 April 2026 04:04:19 +0000 (0:00:00.676) 0:00:45.275 ********
2026-04-11 04:04:21.920063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:21.920098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:22.898386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:22.898526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:22.898566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:22.898587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:22.898702 | orchestrator |
2026-04-11 04:04:22.898718 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-04-11 04:04:22.898730 | orchestrator | Saturday 11 April 2026 04:04:21 +0000 (0:00:02.799) 0:00:48.074 ********
2026-04-11 04:04:22.898759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:22.898788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:22.898799 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:04:22.898818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:22.898829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:22.898846 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:04:22.898857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:22.898884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:26.529084 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:04:26.529174 | orchestrator |
2026-04-11 04:04:26.529186 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-04-11 04:04:26.529195 | orchestrator | Saturday 11 April 2026 04:04:22 +0000 (0:00:00.973) 0:00:49.048 ********
2026-04-11 04:04:26.529220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:26.529231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:26.529240 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:04:26.529247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:26.529288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:26.529296 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:04:26.529319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:26.529332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:26.529338 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:04:26.529346 | orchestrator |
2026-04-11 04:04:26.529352 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-04-11 04:04:26.529358 | orchestrator | Saturday 11 April 2026 04:04:23 +0000 (0:00:00.965) 0:00:50.013 ********
2026-04-11 04:04:26.529366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:26.529380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:26.529392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:33.134543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:33.134720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:33.134735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:33.134763 | orchestrator |
2026-04-11 04:04:33.134773 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-04-11 04:04:33.134782 | orchestrator | Saturday 11 April 2026 04:04:26 +0000 (0:00:02.668) 0:00:52.682 ********
2026-04-11 04:04:33.134790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:33.134815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:33.134827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-11 04:04:33.134836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:33.134849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-11 04:04:33.134857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor',
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 04:04:33.134864 | orchestrator | 2026-04-11 04:04:33.134872 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-11 04:04:33.134880 | orchestrator | Saturday 11 April 2026 04:04:32 +0000 (0:00:05.849) 0:00:58.531 ******** 2026-04-11 04:04:33.134894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-11 04:04:35.087535 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:04:35.087740 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:04:35.087765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-11 04:04:35.087809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:04:35.087824 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:04:35.087838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-11 04:04:35.087875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:04:35.087889 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:04:35.087903 | orchestrator | 2026-04-11 04:04:35.087917 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-11 04:04:35.087932 | orchestrator | Saturday 11 April 2026 04:04:33 +0000 (0:00:00.759) 0:00:59.291 ******** 2026-04-11 04:04:35.087958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-11 04:04:35.087986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-11 04:04:35.088003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-11 04:04:35.088018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 04:04:35.088043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 04:05:25.192807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-04-11 04:05:25.192900 | orchestrator | 2026-04-11 04:05:25.192913 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-11 04:05:25.192922 | orchestrator | Saturday 11 April 2026 04:04:35 +0000 (0:00:01.948) 0:01:01.240 ******** 2026-04-11 04:05:25.192930 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:05:25.192937 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:05:25.192944 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:05:25.192951 | orchestrator | 2026-04-11 04:05:25.192958 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-11 04:05:25.192965 | orchestrator | Saturday 11 April 2026 04:04:35 +0000 (0:00:00.588) 0:01:01.828 ******** 2026-04-11 04:05:25.192972 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:05:25.192979 | orchestrator | 2026-04-11 04:05:25.192985 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-11 04:05:25.192991 | orchestrator | Saturday 11 April 2026 04:04:37 +0000 (0:00:02.213) 0:01:04.041 ******** 2026-04-11 04:05:25.192997 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:05:25.193003 | orchestrator | 2026-04-11 04:05:25.193010 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-11 04:05:25.193017 | orchestrator | Saturday 11 April 2026 04:04:40 +0000 (0:00:02.361) 0:01:06.403 ******** 2026-04-11 04:05:25.193024 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:05:25.193030 | orchestrator | 2026-04-11 04:05:25.193037 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-11 04:05:25.193043 | orchestrator | Saturday 11 April 2026 04:04:57 +0000 (0:00:17.359) 0:01:23.762 ******** 2026-04-11 04:05:25.193050 | orchestrator | 2026-04-11 04:05:25.193056 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-04-11 04:05:25.193064 | orchestrator | Saturday 11 April 2026 04:04:57 +0000 (0:00:00.102) 0:01:23.865 ******** 2026-04-11 04:05:25.193070 | orchestrator | 2026-04-11 04:05:25.193077 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-11 04:05:25.193084 | orchestrator | Saturday 11 April 2026 04:04:57 +0000 (0:00:00.087) 0:01:23.953 ******** 2026-04-11 04:05:25.193090 | orchestrator | 2026-04-11 04:05:25.193097 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-11 04:05:25.193104 | orchestrator | Saturday 11 April 2026 04:04:57 +0000 (0:00:00.076) 0:01:24.029 ******** 2026-04-11 04:05:25.193111 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:05:25.193118 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:05:25.193125 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:05:25.193131 | orchestrator | 2026-04-11 04:05:25.193138 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-11 04:05:25.193145 | orchestrator | Saturday 11 April 2026 04:05:13 +0000 (0:00:15.435) 0:01:39.465 ******** 2026-04-11 04:05:25.193152 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:05:25.193159 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:05:25.193165 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:05:25.193172 | orchestrator | 2026-04-11 04:05:25.193179 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 04:05:25.193187 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 04:05:25.193216 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-11 04:05:25.193223 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-11 04:05:25.193230 | orchestrator | 2026-04-11 04:05:25.193237 | orchestrator | 2026-04-11 04:05:25.193244 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 04:05:25.193250 | orchestrator | Saturday 11 April 2026 04:05:24 +0000 (0:00:11.431) 0:01:50.896 ******** 2026-04-11 04:05:25.193257 | orchestrator | =============================================================================== 2026-04-11 04:05:25.193264 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.36s 2026-04-11 04:05:25.193271 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.44s 2026-04-11 04:05:25.193277 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.43s 2026-04-11 04:05:25.193284 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.87s 2026-04-11 04:05:25.193291 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.85s 2026-04-11 04:05:25.193298 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.22s 2026-04-11 04:05:25.193304 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.98s 2026-04-11 04:05:25.193327 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.95s 2026-04-11 04:05:25.193335 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.71s 2026-04-11 04:05:25.193349 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.55s 2026-04-11 04:05:25.193357 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.49s 2026-04-11 04:05:25.193364 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.33s 2026-04-11 04:05:25.193371 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.32s 2026-04-11 04:05:25.193379 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.80s 2026-04-11 04:05:25.193387 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.67s 2026-04-11 04:05:25.193394 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.55s 2026-04-11 04:05:25.193400 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.36s 2026-04-11 04:05:25.193407 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.21s 2026-04-11 04:05:25.193413 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.95s 2026-04-11 04:05:25.193420 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.67s 2026-04-11 04:05:25.965085 | orchestrator | ok: Runtime: 1:47:23.791734 2026-04-11 04:05:26.215126 | 2026-04-11 04:05:26.215287 | TASK [Deploy in a nutshell] 2026-04-11 04:05:26.752124 | orchestrator | skipping: Conditional result was False 2026-04-11 04:05:26.775545 | 2026-04-11 04:05:26.775704 | TASK [Bootstrap services] 2026-04-11 04:05:27.497550 | orchestrator | 2026-04-11 04:05:27.497708 | orchestrator | # BOOTSTRAP 2026-04-11 04:05:27.497723 | orchestrator | 2026-04-11 04:05:27.497732 | orchestrator | + set -e 2026-04-11 04:05:27.497739 | orchestrator | + echo 2026-04-11 04:05:27.497747 | orchestrator | + echo '# BOOTSTRAP' 2026-04-11 04:05:27.497758 | orchestrator | + echo 2026-04-11 04:05:27.497785 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-11 04:05:27.503993 | orchestrator | + set -e 2026-04-11 04:05:27.504066 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-11 04:05:30.072783 | orchestrator | 2026-04-11 04:05:30 | INFO  | It takes a 
moment until task 242e90f5-76b2-4d60-bddc-efa3f1b79a75 (flavor-manager) has been started and output is visible here.
2026-04-11 04:05:38.259427 | orchestrator | 2026-04-11 04:05:33 | INFO  | Flavor SCS-1L-1 created
2026-04-11 04:05:38.259613 | orchestrator | 2026-04-11 04:05:33 | INFO  | Flavor SCS-1L-1-5 created
2026-04-11 04:05:38.259646 | orchestrator | 2026-04-11 04:05:33 | INFO  | Flavor SCS-1V-2 created
2026-04-11 04:05:38.259665 | orchestrator | 2026-04-11 04:05:34 | INFO  | Flavor SCS-1V-2-5 created
2026-04-11 04:05:38.259686 | orchestrator | 2026-04-11 04:05:34 | INFO  | Flavor SCS-1V-4 created
2026-04-11 04:05:38.259705 | orchestrator | 2026-04-11 04:05:34 | INFO  | Flavor SCS-1V-4-10 created
2026-04-11 04:05:38.259724 | orchestrator | 2026-04-11 04:05:34 | INFO  | Flavor SCS-1V-8 created
2026-04-11 04:05:38.259745 | orchestrator | 2026-04-11 04:05:34 | INFO  | Flavor SCS-1V-8-20 created
2026-04-11 04:05:38.259785 | orchestrator | 2026-04-11 04:05:34 | INFO  | Flavor SCS-2V-4 created
2026-04-11 04:05:38.259804 | orchestrator | 2026-04-11 04:05:34 | INFO  | Flavor SCS-2V-4-10 created
2026-04-11 04:05:38.259823 | orchestrator | 2026-04-11 04:05:35 | INFO  | Flavor SCS-2V-8 created
2026-04-11 04:05:38.259843 | orchestrator | 2026-04-11 04:05:35 | INFO  | Flavor SCS-2V-8-20 created
2026-04-11 04:05:38.259861 | orchestrator | 2026-04-11 04:05:35 | INFO  | Flavor SCS-2V-16 created
2026-04-11 04:05:38.259879 | orchestrator | 2026-04-11 04:05:35 | INFO  | Flavor SCS-2V-16-50 created
2026-04-11 04:05:38.259909 | orchestrator | 2026-04-11 04:05:35 | INFO  | Flavor SCS-4V-8 created
2026-04-11 04:05:38.259928 | orchestrator | 2026-04-11 04:05:35 | INFO  | Flavor SCS-4V-8-20 created
2026-04-11 04:05:38.259947 | orchestrator | 2026-04-11 04:05:35 | INFO  | Flavor SCS-4V-16 created
2026-04-11 04:05:38.259965 | orchestrator | 2026-04-11 04:05:36 | INFO  | Flavor SCS-4V-16-50 created
2026-04-11 04:05:38.259983 | orchestrator | 2026-04-11 04:05:36 | INFO  | Flavor SCS-4V-32 created
2026-04-11 04:05:38.260001 | orchestrator | 2026-04-11 04:05:36 | INFO  | Flavor SCS-4V-32-100 created
2026-04-11 04:05:38.260020 | orchestrator | 2026-04-11 04:05:36 | INFO  | Flavor SCS-8V-16 created
2026-04-11 04:05:38.260040 | orchestrator | 2026-04-11 04:05:36 | INFO  | Flavor SCS-8V-16-50 created
2026-04-11 04:05:38.260059 | orchestrator | 2026-04-11 04:05:36 | INFO  | Flavor SCS-8V-32 created
2026-04-11 04:05:38.260077 | orchestrator | 2026-04-11 04:05:36 | INFO  | Flavor SCS-8V-32-100 created
2026-04-11 04:05:38.260096 | orchestrator | 2026-04-11 04:05:37 | INFO  | Flavor SCS-16V-32 created
2026-04-11 04:05:38.260115 | orchestrator | 2026-04-11 04:05:37 | INFO  | Flavor SCS-16V-32-100 created
2026-04-11 04:05:38.260133 | orchestrator | 2026-04-11 04:05:37 | INFO  | Flavor SCS-2V-4-20s created
2026-04-11 04:05:38.260160 | orchestrator | 2026-04-11 04:05:37 | INFO  | Flavor SCS-4V-8-50s created
2026-04-11 04:05:38.260181 | orchestrator | 2026-04-11 04:05:37 | INFO  | Flavor SCS-8V-32-100s created
2026-04-11 04:05:40.912980 | orchestrator | 2026-04-11 04:05:40 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-04-11 04:05:51.036349 | orchestrator | 2026-04-11 04:05:51 | INFO  | Task 40351030-beed-42ab-a3fa-10938f7396cc (bootstrap-basic) was prepared for execution.
2026-04-11 04:05:51.036464 | orchestrator | 2026-04-11 04:05:51 | INFO  | It takes a moment until task 40351030-beed-42ab-a3fa-10938f7396cc (bootstrap-basic) has been started and output is visible here.
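The flavor names created above follow the SCS flavor-naming convention. As a rough decoding sketch, assuming the scheme `SCS-<vcpus><class>-<ram-GiB>[-<disk-GB>[s]]` (where `V` marks a regular vCPU, `L` a low-performance core, and a trailing `s` an SSD-backed disk; these readings are an interpretation of the names in the log, not taken from the job itself):

```python
import re

# Hypothetical decoder for SCS-style flavor names as seen in the log,
# e.g. "SCS-2V-4-20s" -> 2 vCPUs, 4 GiB RAM, 20 GB SSD disk.
FLAVOR_RE = re.compile(
    r"^SCS-(?P<vcpus>\d+)(?P<cpu>[A-Z])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def decode_flavor(name: str) -> dict:
    """Split an SCS flavor name into its resource components."""
    m = FLAVOR_RE.match(name)
    if m is None:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m.group("vcpus")),
        "cpu_class": m.group("cpu"),          # "V" or "L" in this log
        "ram_gib": int(m.group("ram")),
        "disk_gb": int(m.group("disk")) if m.group("disk") else 0,
        "ssd": m.group("ssd") is not None,    # trailing "s" suffix
    }
```

For example, `decode_flavor("SCS-1L-1")` yields a 1-vCPU, 1-GiB flavor with no root disk, matching the diskless names in the list above.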
2026-04-11 04:06:39.447838 | orchestrator |
2026-04-11 04:06:39.447943 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-04-11 04:06:39.447955 | orchestrator |
2026-04-11 04:06:39.447963 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-11 04:06:39.447972 | orchestrator | Saturday 11 April 2026 04:05:56 +0000 (0:00:00.101) 0:00:00.101 ********
2026-04-11 04:06:39.447980 | orchestrator | ok: [localhost]
2026-04-11 04:06:39.447989 | orchestrator |
2026-04-11 04:06:39.447997 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-04-11 04:06:39.448005 | orchestrator | Saturday 11 April 2026 04:05:58 +0000 (0:00:02.035) 0:00:02.136 ********
2026-04-11 04:06:39.448013 | orchestrator | ok: [localhost]
2026-04-11 04:06:39.448020 | orchestrator |
2026-04-11 04:06:39.448028 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-04-11 04:06:39.448040 | orchestrator | Saturday 11 April 2026 04:06:06 +0000 (0:00:08.040) 0:00:10.177 ********
2026-04-11 04:06:39.448052 | orchestrator | changed: [localhost]
2026-04-11 04:06:39.448068 | orchestrator |
2026-04-11 04:06:39.448084 | orchestrator | TASK [Create public network] ***************************************************
2026-04-11 04:06:39.448097 | orchestrator | Saturday 11 April 2026 04:06:13 +0000 (0:00:06.968) 0:00:17.146 ********
2026-04-11 04:06:39.448109 | orchestrator | changed: [localhost]
2026-04-11 04:06:39.448120 | orchestrator |
2026-04-11 04:06:39.448131 | orchestrator | TASK [Set public network to default] *******************************************
2026-04-11 04:06:39.448142 | orchestrator | Saturday 11 April 2026 04:06:19 +0000 (0:00:05.759) 0:00:22.905 ********
2026-04-11 04:06:39.448159 | orchestrator | changed: [localhost]
2026-04-11 04:06:39.448171 | orchestrator |
2026-04-11 04:06:39.448184 | orchestrator | TASK [Create public subnet] ****************************************************
2026-04-11 04:06:39.448240 | orchestrator | Saturday 11 April 2026 04:06:25 +0000 (0:00:06.820) 0:00:29.726 ********
2026-04-11 04:06:39.448249 | orchestrator | changed: [localhost]
2026-04-11 04:06:39.448257 | orchestrator |
2026-04-11 04:06:39.448265 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-04-11 04:06:39.448272 | orchestrator | Saturday 11 April 2026 04:06:30 +0000 (0:00:04.804) 0:00:34.530 ********
2026-04-11 04:06:39.448284 | orchestrator | changed: [localhost]
2026-04-11 04:06:39.448296 | orchestrator |
2026-04-11 04:06:39.448315 | orchestrator | TASK [Create manager role] *****************************************************
2026-04-11 04:06:39.448341 | orchestrator | Saturday 11 April 2026 04:06:34 +0000 (0:00:04.307) 0:00:38.837 ********
2026-04-11 04:06:39.448354 | orchestrator | ok: [localhost]
2026-04-11 04:06:39.448366 | orchestrator |
2026-04-11 04:06:39.448378 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 04:06:39.448390 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 04:06:39.448403 | orchestrator |
2026-04-11 04:06:39.448416 | orchestrator |
2026-04-11 04:06:39.448429 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 04:06:39.448442 | orchestrator | Saturday 11 April 2026 04:06:39 +0000 (0:00:04.132) 0:00:42.970 ********
2026-04-11 04:06:39.448455 | orchestrator | ===============================================================================
2026-04-11 04:06:39.448469 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.04s
2026-04-11 04:06:39.448482 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.97s
2026-04-11 04:06:39.448497 | orchestrator | Set public network to default ------------------------------------------- 6.82s
2026-04-11 04:06:39.448507 | orchestrator | Create public network --------------------------------------------------- 5.76s
2026-04-11 04:06:39.448578 | orchestrator | Create public subnet ---------------------------------------------------- 4.80s
2026-04-11 04:06:39.448588 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.31s
2026-04-11 04:06:39.448597 | orchestrator | Create manager role ----------------------------------------------------- 4.13s
2026-04-11 04:06:39.448605 | orchestrator | Gathering Facts --------------------------------------------------------- 2.04s
2026-04-11 04:06:42.171788 | orchestrator | 2026-04-11 04:06:42 | INFO  | It takes a moment until task e11caa3e-bdb1-4ec0-b008-e49199bd82aa (image-manager) has been started and output is visible here.
2026-04-11 04:07:25.559092 | orchestrator | 2026-04-11 04:06:45 | INFO  | Processing image 'Cirros 0.6.2'
2026-04-11 04:07:25.559209 | orchestrator | 2026-04-11 04:06:45 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-04-11 04:07:25.559232 | orchestrator | 2026-04-11 04:06:45 | INFO  | Importing image Cirros 0.6.2
2026-04-11 04:07:25.559250 | orchestrator | 2026-04-11 04:06:45 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-11 04:07:25.559268 | orchestrator | 2026-04-11 04:06:47 | INFO  | Waiting for image to leave queued state...
2026-04-11 04:07:25.559285 | orchestrator | 2026-04-11 04:06:49 | INFO  | Waiting for import to complete...
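The "Waiting for image to leave queued state..." and "Waiting for import to complete..." lines above come from a poll-until-done pattern: check the image status, sleep, re-check. A minimal generic sketch of that loop (hypothetical helper names, not the actual openstack-image-manager code):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0, label="condition"):
    """Poll `condition()` until it returns True or `timeout` seconds elapse.

    Mirrors the wait loops visible in the log above: check, sleep,
    re-check, and fail loudly with TimeoutError instead of hanging.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError(f"timed out after {timeout}s waiting for {label}")
```

In the import flow above this would be called twice: once with a predicate like "image status != queued" and once with "image status == active", each against a freshly fetched image record.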
2026-04-11 04:07:25.559301 | orchestrator | 2026-04-11 04:06:59 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-04-11 04:07:25.559320 | orchestrator | 2026-04-11 04:07:00 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-04-11 04:07:25.559339 | orchestrator | 2026-04-11 04:07:00 | INFO  | Setting internal_version = 0.6.2
2026-04-11 04:07:25.559357 | orchestrator | 2026-04-11 04:07:00 | INFO  | Setting image_original_user = cirros
2026-04-11 04:07:25.559375 | orchestrator | 2026-04-11 04:07:00 | INFO  | Adding tag os:cirros
2026-04-11 04:07:25.559392 | orchestrator | 2026-04-11 04:07:00 | INFO  | Setting property architecture: x86_64
2026-04-11 04:07:25.559410 | orchestrator | 2026-04-11 04:07:00 | INFO  | Setting property hw_disk_bus: scsi
2026-04-11 04:07:25.559428 | orchestrator | 2026-04-11 04:07:00 | INFO  | Setting property hw_rng_model: virtio
2026-04-11 04:07:25.559445 | orchestrator | 2026-04-11 04:07:01 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-11 04:07:25.559459 | orchestrator | 2026-04-11 04:07:01 | INFO  | Setting property hw_watchdog_action: reset
2026-04-11 04:07:25.559469 | orchestrator | 2026-04-11 04:07:01 | INFO  | Setting property hypervisor_type: qemu
2026-04-11 04:07:25.559480 | orchestrator | 2026-04-11 04:07:02 | INFO  | Setting property os_distro: cirros
2026-04-11 04:07:25.559490 | orchestrator | 2026-04-11 04:07:02 | INFO  | Setting property os_purpose: minimal
2026-04-11 04:07:25.559500 | orchestrator | 2026-04-11 04:07:02 | INFO  | Setting property replace_frequency: never
2026-04-11 04:07:25.559510 | orchestrator | 2026-04-11 04:07:02 | INFO  | Setting property uuid_validity: none
2026-04-11 04:07:25.559520 | orchestrator | 2026-04-11 04:07:03 | INFO  | Setting property provided_until: none
2026-04-11 04:07:25.559568 | orchestrator | 2026-04-11 04:07:03 | INFO  | Setting property image_description: Cirros
2026-04-11 04:07:25.559597 | orchestrator | 2026-04-11 04:07:03 | INFO  | Setting property image_name: Cirros
2026-04-11 04:07:25.559613 | orchestrator | 2026-04-11 04:07:04 | INFO  | Setting property internal_version: 0.6.2
2026-04-11 04:07:25.559629 | orchestrator | 2026-04-11 04:07:04 | INFO  | Setting property image_original_user: cirros
2026-04-11 04:07:25.559682 | orchestrator | 2026-04-11 04:07:04 | INFO  | Setting property os_version: 0.6.2
2026-04-11 04:07:25.559712 | orchestrator | 2026-04-11 04:07:04 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-11 04:07:25.559733 | orchestrator | 2026-04-11 04:07:05 | INFO  | Setting property image_build_date: 2023-05-30
2026-04-11 04:07:25.559750 | orchestrator | 2026-04-11 04:07:05 | INFO  | Checking status of 'Cirros 0.6.2'
2026-04-11 04:07:25.559767 | orchestrator | 2026-04-11 04:07:05 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-04-11 04:07:25.559784 | orchestrator | 2026-04-11 04:07:05 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-04-11 04:07:25.559795 | orchestrator | 2026-04-11 04:07:05 | INFO  | Processing image 'Cirros 0.6.3'
2026-04-11 04:07:25.559811 | orchestrator | 2026-04-11 04:07:05 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-04-11 04:07:25.559823 | orchestrator | 2026-04-11 04:07:05 | INFO  | Importing image Cirros 0.6.3
2026-04-11 04:07:25.559835 | orchestrator | 2026-04-11 04:07:05 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-11 04:07:25.559847 | orchestrator | 2026-04-11 04:07:06 | INFO  | Waiting for image to leave queued state...
2026-04-11 04:07:25.559859 | orchestrator | 2026-04-11 04:07:08 | INFO  | Waiting for import to complete...
2026-04-11 04:07:25.559891 | orchestrator | 2026-04-11 04:07:18 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-04-11 04:07:25.559909 | orchestrator | 2026-04-11 04:07:19 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-04-11 04:07:25.559934 | orchestrator | 2026-04-11 04:07:19 | INFO  | Setting internal_version = 0.6.3 2026-04-11 04:07:25.559952 | orchestrator | 2026-04-11 04:07:19 | INFO  | Setting image_original_user = cirros 2026-04-11 04:07:25.559967 | orchestrator | 2026-04-11 04:07:19 | INFO  | Adding tag os:cirros 2026-04-11 04:07:25.559982 | orchestrator | 2026-04-11 04:07:19 | INFO  | Setting property architecture: x86_64 2026-04-11 04:07:25.559999 | orchestrator | 2026-04-11 04:07:19 | INFO  | Setting property hw_disk_bus: scsi 2026-04-11 04:07:25.560016 | orchestrator | 2026-04-11 04:07:20 | INFO  | Setting property hw_rng_model: virtio 2026-04-11 04:07:25.560034 | orchestrator | 2026-04-11 04:07:20 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-11 04:07:25.560051 | orchestrator | 2026-04-11 04:07:20 | INFO  | Setting property hw_watchdog_action: reset 2026-04-11 04:07:25.560067 | orchestrator | 2026-04-11 04:07:21 | INFO  | Setting property hypervisor_type: qemu 2026-04-11 04:07:25.560083 | orchestrator | 2026-04-11 04:07:21 | INFO  | Setting property os_distro: cirros 2026-04-11 04:07:25.560099 | orchestrator | 2026-04-11 04:07:21 | INFO  | Setting property os_purpose: minimal 2026-04-11 04:07:25.560115 | orchestrator | 2026-04-11 04:07:21 | INFO  | Setting property replace_frequency: never 2026-04-11 04:07:25.560132 | orchestrator | 2026-04-11 04:07:22 | INFO  | Setting property uuid_validity: none 2026-04-11 04:07:25.560147 | orchestrator | 2026-04-11 04:07:22 | INFO  | Setting property provided_until: none 2026-04-11 04:07:25.560163 | orchestrator | 2026-04-11 04:07:22 | INFO  | Setting property image_description: Cirros 2026-04-11 04:07:25.560179 | orchestrator | 2026-04-11 04:07:23 | INFO  | 
Setting property image_name: Cirros 2026-04-11 04:07:25.560194 | orchestrator | 2026-04-11 04:07:23 | INFO  | Setting property internal_version: 0.6.3 2026-04-11 04:07:25.560223 | orchestrator | 2026-04-11 04:07:23 | INFO  | Setting property image_original_user: cirros 2026-04-11 04:07:25.560238 | orchestrator | 2026-04-11 04:07:23 | INFO  | Setting property os_version: 0.6.3 2026-04-11 04:07:25.560253 | orchestrator | 2026-04-11 04:07:24 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-11 04:07:25.560268 | orchestrator | 2026-04-11 04:07:24 | INFO  | Setting property image_build_date: 2024-09-26 2026-04-11 04:07:25.560283 | orchestrator | 2026-04-11 04:07:24 | INFO  | Checking status of 'Cirros 0.6.3' 2026-04-11 04:07:25.560299 | orchestrator | 2026-04-11 04:07:24 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-04-11 04:07:25.560314 | orchestrator | 2026-04-11 04:07:24 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-04-11 04:07:25.915764 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh 2026-04-11 04:07:28.358412 | orchestrator | 2026-04-11 04:07:28 | INFO  | date: 2026-04-11 2026-04-11 04:07:28.358602 | orchestrator | 2026-04-11 04:07:28 | INFO  | image: octavia-amphora-haproxy-2024.2.20260411.qcow2 2026-04-11 04:07:28.358641 | orchestrator | 2026-04-11 04:07:28 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260411.qcow2 2026-04-11 04:07:28.358653 | orchestrator | 2026-04-11 04:07:28 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260411.qcow2.CHECKSUM 2026-04-11 04:07:28.506427 | orchestrator | 2026-04-11 04:07:28 | INFO  | checksum: 7abf27fe1ff608c5a6760db8d6037946a96c79b011bdeb6963f3a225807488d4 2026-04-11 04:07:28.590473 | orchestrator | 
2026-04-11 04:07:28 | INFO  | It takes a moment until task fc2c0c6f-fb99-4a4b-b2d6-ecf87b38ce67 (image-manager) has been started and output is visible here. 2026-04-11 04:08:41.929669 | orchestrator | 2026-04-11 04:07:31 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-11' 2026-04-11 04:08:41.929787 | orchestrator | 2026-04-11 04:07:31 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260411.qcow2: 200 2026-04-11 04:08:41.929803 | orchestrator | 2026-04-11 04:07:31 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-11 2026-04-11 04:08:41.929813 | orchestrator | 2026-04-11 04:07:31 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260411.qcow2 2026-04-11 04:08:41.929823 | orchestrator | 2026-04-11 04:07:32 | INFO  | Waiting for image to leave queued state... 2026-04-11 04:08:41.929844 | orchestrator | 2026-04-11 04:07:34 | INFO  | Waiting for import to complete... 2026-04-11 04:08:41.929888 | orchestrator | 2026-04-11 04:07:44 | INFO  | Waiting for import to complete... 2026-04-11 04:08:41.929898 | orchestrator | 2026-04-11 04:07:54 | INFO  | Waiting for import to complete... 2026-04-11 04:08:41.929907 | orchestrator | 2026-04-11 04:08:05 | INFO  | Waiting for import to complete... 2026-04-11 04:08:41.929928 | orchestrator | 2026-04-11 04:08:15 | INFO  | Waiting for import to complete... 2026-04-11 04:08:41.929938 | orchestrator | 2026-04-11 04:08:25 | INFO  | Waiting for import to complete... 
2026-04-11 04:08:41.929947 | orchestrator | 2026-04-11 04:08:35 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-11' successfully completed, reloading images 2026-04-11 04:08:41.929956 | orchestrator | 2026-04-11 04:08:36 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-11' 2026-04-11 04:08:41.929987 | orchestrator | 2026-04-11 04:08:36 | INFO  | Setting internal_version = 2026-04-11 2026-04-11 04:08:41.929995 | orchestrator | 2026-04-11 04:08:36 | INFO  | Setting image_original_user = ubuntu 2026-04-11 04:08:41.930004 | orchestrator | 2026-04-11 04:08:36 | INFO  | Adding tag amphora 2026-04-11 04:08:41.930012 | orchestrator | 2026-04-11 04:08:36 | INFO  | Adding tag os:ubuntu 2026-04-11 04:08:41.930067 | orchestrator | 2026-04-11 04:08:36 | INFO  | Setting property architecture: x86_64 2026-04-11 04:08:41.930075 | orchestrator | 2026-04-11 04:08:36 | INFO  | Setting property hw_disk_bus: scsi 2026-04-11 04:08:41.930084 | orchestrator | 2026-04-11 04:08:37 | INFO  | Setting property hw_rng_model: virtio 2026-04-11 04:08:41.930092 | orchestrator | 2026-04-11 04:08:37 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-11 04:08:41.930100 | orchestrator | 2026-04-11 04:08:37 | INFO  | Setting property hw_watchdog_action: reset 2026-04-11 04:08:41.930108 | orchestrator | 2026-04-11 04:08:37 | INFO  | Setting property hypervisor_type: qemu 2026-04-11 04:08:41.930116 | orchestrator | 2026-04-11 04:08:38 | INFO  | Setting property os_distro: ubuntu 2026-04-11 04:08:41.930124 | orchestrator | 2026-04-11 04:08:38 | INFO  | Setting property replace_frequency: quarterly 2026-04-11 04:08:41.930133 | orchestrator | 2026-04-11 04:08:38 | INFO  | Setting property uuid_validity: last-1 2026-04-11 04:08:41.930140 | orchestrator | 2026-04-11 04:08:38 | INFO  | Setting property provided_until: none 2026-04-11 04:08:41.930149 | orchestrator | 2026-04-11 04:08:39 | INFO  | Setting property os_purpose: network 2026-04-11 04:08:41.930169 | orchestrator 
| 2026-04-11 04:08:39 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-04-11 04:08:41.930178 | orchestrator | 2026-04-11 04:08:39 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-04-11 04:08:41.930186 | orchestrator | 2026-04-11 04:08:40 | INFO  | Setting property internal_version: 2026-04-11 2026-04-11 04:08:41.930194 | orchestrator | 2026-04-11 04:08:40 | INFO  | Setting property image_original_user: ubuntu 2026-04-11 04:08:41.930202 | orchestrator | 2026-04-11 04:08:40 | INFO  | Setting property os_version: 2026-04-11 2026-04-11 04:08:41.930211 | orchestrator | 2026-04-11 04:08:40 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260411.qcow2 2026-04-11 04:08:41.930221 | orchestrator | 2026-04-11 04:08:41 | INFO  | Setting property image_build_date: 2026-04-11 2026-04-11 04:08:41.930230 | orchestrator | 2026-04-11 04:08:41 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-11' 2026-04-11 04:08:41.930239 | orchestrator | 2026-04-11 04:08:41 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-11' 2026-04-11 04:08:41.930264 | orchestrator | 2026-04-11 04:08:41 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-04-11 04:08:41.930275 | orchestrator | 2026-04-11 04:08:41 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-04-11 04:08:41.930285 | orchestrator | 2026-04-11 04:08:41 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-04-11 04:08:41.930295 | orchestrator | 2026-04-11 04:08:41 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-04-11 04:08:42.462014 | orchestrator | ok: Runtime: 0:03:15.260701 2026-04-11 04:08:42.481970 | 2026-04-11 04:08:42.482115 | TASK [Run checks] 2026-04-11 04:08:43.353662 | orchestrator | + set -e 2026-04-11 04:08:43.353813 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-04-11 04:08:43.353829 | orchestrator | ++ export INTERACTIVE=false 2026-04-11 04:08:43.353843 | orchestrator | ++ INTERACTIVE=false 2026-04-11 04:08:43.353851 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-11 04:08:43.353858 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-11 04:08:43.353866 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-11 04:08:43.354790 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-11 04:08:43.361884 | orchestrator | 2026-04-11 04:08:43.361984 | orchestrator | # CHECK 2026-04-11 04:08:43.361994 | orchestrator | 2026-04-11 04:08:43.362002 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-11 04:08:43.362013 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-11 04:08:43.362054 | orchestrator | + echo 2026-04-11 04:08:43.362061 | orchestrator | + echo '# CHECK' 2026-04-11 04:08:43.362067 | orchestrator | + echo 2026-04-11 04:08:43.362078 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-11 04:08:43.362887 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-11 04:08:43.422228 | orchestrator | 2026-04-11 04:08:43.422338 | orchestrator | ## Containers @ testbed-manager 2026-04-11 04:08:43.422354 | orchestrator | 2026-04-11 04:08:43.422374 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-11 04:08:43.422395 | orchestrator | + echo 2026-04-11 04:08:43.422414 | orchestrator | + echo '## Containers @ testbed-manager' 2026-04-11 04:08:43.422434 | orchestrator | + echo 2026-04-11 04:08:43.422453 | orchestrator | + osism container testbed-manager ps 2026-04-11 04:08:45.843329 | orchestrator | 2026-04-11 04:08:45 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-04-11 04:08:46.211414 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-11 04:08:46.211557 | orchestrator | 66f6da3b89e1 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter 2026-04-11 04:08:46.211575 | orchestrator | 9f159d561e50 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager 2026-04-11 04:08:46.211588 | orchestrator | ed6dff0cb31e registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-04-11 04:08:46.211596 | orchestrator | 18b8e9001b8a registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-04-11 04:08:46.211602 | orchestrator | 4fd10d676ace registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server 2026-04-11 04:08:46.211614 | orchestrator | 69833ca710a3 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" About an hour ago Up About an hour cephclient 2026-04-11 04:08:46.211622 | orchestrator | e9c7c85b3e4d registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-11 04:08:46.211629 | orchestrator | 9689449c4ab4 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-11 04:08:46.211660 | orchestrator | 4e11bf7c0655 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-11 04:08:46.211668 | orchestrator | a21b36f1523a registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-04-11 04:08:46.211675 | orchestrator | ccfe7e33f249 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-04-11 04:08:46.211682 
| orchestrator | 8665e275d4a5 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-04-11 04:08:46.211690 | orchestrator | baa8af4999aa registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-04-11 04:08:46.211697 | orchestrator | 803326b0285d registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-04-11 04:08:46.211723 | orchestrator | 3ae40a157bd6 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-04-11 04:08:46.211731 | orchestrator | 452e9234165f registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-04-11 04:08:46.211738 | orchestrator | a751517e33ae registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-04-11 04:08:46.211745 | orchestrator | 5861c9f157d0 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-04-11 04:08:46.211752 | orchestrator | 3692e141b33e registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-04-11 04:08:46.211759 | orchestrator | 16ba2d0a41f8 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-04-11 04:08:46.211767 | orchestrator | d20b6785b8e8 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-04-11 04:08:46.211774 | orchestrator | 42f0b2bd412f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-04-11 04:08:46.211789 | orchestrator | 
e21dd2b18e72 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-04-11 04:08:46.211796 | orchestrator | e8a0a4d4c6ac registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-04-11 04:08:46.211803 | orchestrator | 3c3c01ad4b46 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-04-11 04:08:46.211810 | orchestrator | 10a006a89fd9 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-04-11 04:08:46.211817 | orchestrator | a2ec40dd38c6 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-04-11 04:08:46.211931 | orchestrator | bfabd2a828c6 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-04-11 04:08:46.212117 | orchestrator | aded781efa65 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-04-11 04:08:46.212130 | orchestrator | ef27e5528914 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-04-11 04:08:46.603261 | orchestrator | 2026-04-11 04:08:46.603357 | orchestrator | ## Images @ testbed-manager 2026-04-11 04:08:46.603368 | orchestrator | 2026-04-11 04:08:46.603377 | orchestrator | + echo 2026-04-11 04:08:46.603386 | orchestrator | + echo '## Images @ testbed-manager' 2026-04-11 04:08:46.603395 | orchestrator | + echo 2026-04-11 04:08:46.603407 | orchestrator | + osism container testbed-manager images 2026-04-11 04:08:49.120525 | orchestrator | 
REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-11 04:08:49.120623 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 27fe929207d0 24 hours ago 246MB 2026-04-11 04:08:49.120647 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB 2026-04-11 04:08:49.120655 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB 2026-04-11 04:08:49.120663 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB 2026-04-11 04:08:49.120673 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-11 04:08:49.120680 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-11 04:08:49.120688 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-11 04:08:49.120695 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB 2026-04-11 04:08:49.120702 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-11 04:08:49.120734 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB 2026-04-11 04:08:49.120742 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB 2026-04-11 04:08:49.120749 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-11 04:08:49.120756 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB 2026-04-11 04:08:49.120764 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB 2026-04-11 04:08:49.120771 | orchestrator 
| registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB 2026-04-11 04:08:49.120778 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB 2026-04-11 04:08:49.120785 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB 2026-04-11 04:08:49.120792 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 4 months ago 238MB 2026-04-11 04:08:49.120800 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-04-11 04:08:49.120807 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB 2026-04-11 04:08:49.120814 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-04-11 04:08:49.120821 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-04-11 04:08:49.120828 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 11 months ago 453MB 2026-04-11 04:08:49.120835 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB 2026-04-11 04:08:49.120842 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-04-11 04:08:49.500740 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-11 04:08:49.501153 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-11 04:08:49.568343 | orchestrator | 2026-04-11 04:08:49.568435 | orchestrator | ## Containers @ testbed-node-0 2026-04-11 04:08:49.568451 | orchestrator | 2026-04-11 04:08:49.568465 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-11 04:08:49.568481 | orchestrator | + echo 2026-04-11 04:08:49.568518 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-04-11 04:08:49.568530 | orchestrator | + echo 2026-04-11 04:08:49.568541 | orchestrator | + osism container testbed-node-0 ps 
2026-04-11 04:08:52.338069 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-11 04:08:52.338158 | orchestrator | 4b7a021059fe registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-04-11 04:08:52.338169 | orchestrator | 4b3386effb97 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-04-11 04:08:52.338177 | orchestrator | 87f55f2fa958 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-04-11 04:08:52.338183 | orchestrator | f80176411f4f registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-04-11 04:08:52.338211 | orchestrator | 24d2ceb014f4 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-04-11 04:08:52.338218 | orchestrator | 211083bc3c18 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-04-11 04:08:52.338231 | orchestrator | 2f92e11d3461 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-04-11 04:08:52.338238 | orchestrator | 054b6bb968b0 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-04-11 04:08:52.338245 | orchestrator | 1b9e88c7fd77 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-04-11 04:08:52.339159 | orchestrator | 1f39c9acb636 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-04-11 04:08:52.339219 | orchestrator | ebf5ba62ecce registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-04-11 04:08:52.339228 | orchestrator | 9f2ce1f03b68 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-04-11 04:08:52.339235 | orchestrator | 06b2d6ca9446 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-04-11 04:08:52.339241 | orchestrator | 5458367622f3 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-04-11 04:08:52.339247 | orchestrator | 7f7edf3b3cc1 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-04-11 04:08:52.339253 | orchestrator | 23667a7a3fa8 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-04-11 04:08:52.339271 | orchestrator | 283f0cd105bb registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-04-11 04:08:52.339278 | orchestrator | 82e0653b924f registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-04-11 04:08:52.339284 | orchestrator | 44a57af9678e registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-04-11 04:08:52.339290 | orchestrator | aa7a4bb9f727 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-04-11 04:08:52.339296 | orchestrator | 2f79758f1f19 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-04-11 04:08:52.339302 | orchestrator | ea28106e0c65 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-04-11 04:08:52.339321 | orchestrator | ceb4b8a4275f registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api 2026-04-11 04:08:52.339327 | orchestrator | ccbf62a98f35 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-04-11 04:08:52.339333 | orchestrator | e0f3b88d9807 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-04-11 04:08:52.339343 | orchestrator | 118fac2e617f registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-04-11 04:08:52.339350 | orchestrator | f96e1c1a892b registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-04-11 04:08:52.339355 | orchestrator | f8d6ca088422 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api 2026-04-11 04:08:52.339361 | orchestrator | 88ce813440c5 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9 
2026-04-11 04:08:52.339380 | orchestrator | 49b0779c1ed9 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker
2026-04-11 04:08:52.339386 | orchestrator | d8ab534b5086 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener
2026-04-11 04:08:52.339394 | orchestrator | 9ec93a72339b registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-04-11 04:08:52.339408 | orchestrator | 7087c5838cb8 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_backup
2026-04-11 04:08:52.339414 | orchestrator | 235a1ee887a9 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume
2026-04-11 04:08:52.339420 | orchestrator | a1fd38fd6b33 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-04-11 04:08:52.339425 | orchestrator | 880f84acb0e0 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_api
2026-04-11 04:08:52.339431 | orchestrator | 9765f0fc92f5 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api
2026-04-11 04:08:52.339437 | orchestrator | caca06f80ce6 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_console
2026-04-11 04:08:52.339450 | orchestrator | 189bc58e3fdd registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver
2026-04-11 04:08:52.339469 | orchestrator | 9eecf22cec67 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon
2026-04-11 04:08:52.339480 | orchestrator | 7e7747251522 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_novncproxy
2026-04-11 04:08:52.339529 | orchestrator | b606805da77b registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor
2026-04-11 04:08:52.339539 | orchestrator | 5d9fe0e5122e registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api
2026-04-11 04:08:52.339548 | orchestrator | 7346cfc3c509 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_scheduler
2026-04-11 04:08:52.339557 | orchestrator | 5fc7a0fb0ecb registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) neutron_server
2026-04-11 04:08:52.339566 | orchestrator | 20d5c4594b1d registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api
2026-04-11 04:08:52.339575 | orchestrator | 5722867ec64b registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone
2026-04-11 04:08:52.339584 | orchestrator | d6b9ff14e1bf registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_fernet
2026-04-11 04:08:52.339594 | orchestrator | e0bd669e368c registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) keystone_ssh
2026-04-11 04:08:52.339603 | orchestrator | b6778221c7da registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 59 minutes ago Up 59 minutes ceph-mgr-testbed-node-0
2026-04-11 04:08:52.339621 | orchestrator | ea3d5ced2ba2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-04-11 04:08:52.339636 | orchestrator | 1b0d6fe4ad27 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-04-11 04:08:52.339644 | orchestrator | 186152b74324 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-04-11 04:08:52.339650 | orchestrator | 511d46e24dff registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-04-11 04:08:52.339657 | orchestrator | 14a7ecd4081f registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-04-11 04:08:52.339665 | orchestrator | 20613daea051 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-04-11 04:08:52.339671 | orchestrator | 3de35f1b66d4 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-04-11 04:08:52.339685 | orchestrator | bdad7c655590 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-04-11 04:08:52.339691 | orchestrator | 7b6d09c14d1a registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-04-11 04:08:52.339698 | orchestrator | d95fc32b1ba5 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-04-11 04:08:52.339705 | orchestrator | 2bbba3bd4617 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel
2026-04-11 04:08:52.339712 | orchestrator | 161eebb359bc registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis
2026-04-11 04:08:52.339718 | orchestrator | f636e603368b registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached
2026-04-11 04:08:52.339725 | orchestrator | 68e97c26c422 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-04-11 04:08:52.339732 | orchestrator | 768406cf2455 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-04-11 04:08:52.339739 | orchestrator | 9ace8b8255a1 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-04-11 04:08:52.339746 | orchestrator | 027e2ebbce80 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-04-11 04:08:52.339763 | orchestrator | a3d6bd12c99d registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-04-11 04:08:52.339771 | orchestrator | caa1e58514cf registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-11 04:08:52.339784 | orchestrator | c2d09f1d18fe registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-11 04:08:52.339795 | orchestrator | af3cb0c3aff3 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-11 04:08:52.717897 | orchestrator |
2026-04-11 04:08:52.717994 | orchestrator | ## Images @ testbed-node-0
2026-04-11 04:08:52.718007 | orchestrator |
2026-04-11 04:08:52.718068 | orchestrator | + echo
2026-04-11 04:08:52.718078 | orchestrator | + echo '## Images @ testbed-node-0'
2026-04-11 04:08:52.718087 | orchestrator | + echo
2026-04-11 04:08:52.718096 | orchestrator | + osism container testbed-node-0 images
2026-04-11 04:08:55.483023 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-11 04:08:55.483172 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-11 04:08:55.483200 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-11 04:08:55.483218 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-11 04:08:55.483285 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-11 04:08:55.483306 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-11 04:08:55.483327 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-11 04:08:55.483339 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-11 04:08:55.483350 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-11 04:08:55.483361 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-11 04:08:55.483372 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-11 04:08:55.483383 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-11 04:08:55.483393 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-11 04:08:55.483404 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-11 04:08:55.483414 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-11 04:08:55.483425 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-11 04:08:55.483452 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-11 04:08:55.483463 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-11 04:08:55.483570 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-11 04:08:55.483585 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-11 04:08:55.483596 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-11 04:08:55.483607 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-11 04:08:55.483617 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-11 04:08:55.483628 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-11 04:08:55.483638 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-11 04:08:55.483656 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-11 04:08:55.483666 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-11 04:08:55.483677 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-11 04:08:55.483687 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-11 04:08:55.483698 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-11 04:08:55.483720 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-11 04:08:55.483731 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-11 04:08:55.483741 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-11 04:08:55.483752 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-11 04:08:55.483799 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-11 04:08:55.483811 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-11 04:08:55.483821 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-11 04:08:55.483832 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-11 04:08:55.483843 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-11 04:08:55.483853 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-11 04:08:55.483877 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-11 04:08:55.483888 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-11 04:08:55.483899 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-11 04:08:55.483910 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-11 04:08:55.483920 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-11 04:08:55.483931 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-11 04:08:55.483942 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-11 04:08:55.483960 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-11 04:08:55.483971 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-11 04:08:55.483981 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-11 04:08:55.483992 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-11 04:08:55.484003 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-11 04:08:55.484014 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-11 04:08:55.484024 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-11 04:08:55.484035 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-11 04:08:55.484046 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-11 04:08:55.484065 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-11 04:08:55.484075 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-11 04:08:55.484086 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-11 04:08:55.484096 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-11 04:08:55.484107 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-11 04:08:55.484118 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-11 04:08:55.484128 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-11 04:08:55.484139 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-11 04:08:55.484150 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-11 04:08:55.484160 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-11 04:08:55.484171 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-11 04:08:55.484182 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-11 04:08:55.484192 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-11 04:08:55.484203 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-11 04:08:55.876760 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-11 04:08:55.876836 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-11 04:08:55.932207 | orchestrator |
2026-04-11 04:08:55.932308 | orchestrator | ## Containers @ testbed-node-1
2026-04-11 04:08:55.932325 | orchestrator |
2026-04-11 04:08:55.932335 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-11 04:08:55.932344 | orchestrator | + echo
2026-04-11 04:08:55.932354 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-04-11 04:08:55.932363 | orchestrator | + echo
2026-04-11 04:08:55.932372 | orchestrator | + osism container testbed-node-1 ps
2026-04-11 04:08:58.518955 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-11 04:08:58.519045 | orchestrator | 26ab76a2f1d9 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-04-11 04:08:58.519058 | orchestrator | 7bf4557b27b1 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-04-11 04:08:58.519092 | orchestrator | e6f863942636 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-04-11 04:08:58.519101 | orchestrator | b65b5a57b056 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 10 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-04-11 04:08:58.519128 | orchestrator | 642a5cbb2d05 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-04-11 04:08:58.519160 | orchestrator | 13935b709f5c registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-04-11 04:08:58.519170 | orchestrator | 494fcc340555 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-04-11 04:08:58.519183 | orchestrator | fa0db8fe3af0 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-11 04:08:58.519193 | orchestrator | b859bdf33a28 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-04-11 04:08:58.519202 | orchestrator | c2065ded2c35 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-04-11 04:08:58.519210 | orchestrator | a18fb19a619b registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-04-11 04:08:58.519219 | orchestrator | e157a3d87b82 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-04-11 04:08:58.519228 | orchestrator | 03f7eb11e660 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-04-11 04:08:58.519237 | orchestrator | 9144e3ca3764 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-04-11 04:08:58.519246 | orchestrator | c45f8ec55549 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-04-11 04:08:58.519255 | orchestrator | a461c065f2ba registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-04-11 04:08:58.519264 | orchestrator | 25f5373c24b6 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-04-11 04:08:58.519272 | orchestrator | af896bd185fa registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-04-11 04:08:58.519282 | orchestrator | bd9df78838b9 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-04-11 04:08:58.519306 | orchestrator | 2d325b3f412a registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-04-11 04:08:58.519315 | orchestrator | 227f21f4f9a5 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-04-11 04:08:58.519324 | orchestrator | 3d2e5724a2f1 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-04-11 04:08:58.519333 | orchestrator | 145609a8049c registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api
2026-04-11 04:08:58.519349 | orchestrator | 79ae3c06541a registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker
2026-04-11 04:08:58.519358 | orchestrator | 0abc9e8cf605 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-04-11 04:08:58.519367 | orchestrator | 1fbdf31b796e registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-04-11 04:08:58.519376 | orchestrator | e366b34c0ec5 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-04-11 04:08:58.519390 | orchestrator | d557077a8da7 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api
2026-04-11 04:08:58.519399 | orchestrator | d26f4e0d4430 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9
2026-04-11 04:08:58.519408 | orchestrator | 436617ff05bd registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker
2026-04-11 04:08:58.519416 | orchestrator | be5f60d595a8 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener
2026-04-11 04:08:58.519425 | orchestrator | 88f66862ec7c registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-04-11 04:08:58.519434 | orchestrator | 5c7f7f863eb3 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_backup
2026-04-11 04:08:58.519442 | orchestrator | 5f8e9ba84bef registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume
2026-04-11 04:08:58.519451 | orchestrator | 7b7211f3a195 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-04-11 04:08:58.519460 | orchestrator | 5018ca84e682 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-04-11 04:08:58.519469 | orchestrator | 37c010365175 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api
2026-04-11 04:08:58.519478 | orchestrator | 0b33a70469cb registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_console
2026-04-11 04:08:58.519537 | orchestrator | d2ddd2c0ff6f registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver
2026-04-11 04:08:58.519557 | orchestrator | bef070296495 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon
2026-04-11 04:08:58.519569 | orchestrator | c592dd4ca9e2 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_novncproxy
2026-04-11 04:08:58.519586 | orchestrator | 2cdc54ea0681 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor
2026-04-11 04:08:58.519596 | orchestrator | 4e5546ea6570 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api
2026-04-11 04:08:58.519606 | orchestrator | 05de838db269 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_scheduler
2026-04-11 04:08:58.519616 | orchestrator | 421b53212b9f registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) neutron_server
2026-04-11 04:08:58.519626 | orchestrator | 5ea481261fff registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api
2026-04-11 04:08:58.519648 | orchestrator | c2de825d65b9 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone
2026-04-11 04:08:58.519659 | orchestrator | 4d11e5366a18 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_fernet
2026-04-11 04:08:58.519668 | orchestrator | 8b98e97e813f registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_ssh
2026-04-11 04:08:58.519689 | orchestrator | 8eed782f152a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 59 minutes ago Up 59 minutes ceph-mgr-testbed-node-1
2026-04-11 04:08:58.519699 | orchestrator | d9483a734754 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1
2026-04-11 04:08:58.519709 | orchestrator | 1a56ecc96cb4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1
2026-04-11 04:08:58.519720 | orchestrator | b4b589e61415 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-04-11 04:08:58.519730 | orchestrator | 652f0b815ba1 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-04-11 04:08:58.519745 | orchestrator | 9d4bbe00a1e9 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-04-11 04:08:58.519756 | orchestrator | eb082951e08c registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-04-11 04:08:58.519767 | orchestrator | 53c952c66f24 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-04-11 04:08:58.519777 | orchestrator | b619ec687652 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-04-11 04:08:58.519793 | orchestrator | 13c53a0cf6e3 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-04-11 04:08:58.519809 | orchestrator | c1f28bbb8814 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-04-11 04:08:58.519819 | orchestrator | d41820dc36f8 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel
2026-04-11 04:08:58.519829 | orchestrator | f1070aedf47e registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis
2026-04-11 04:08:58.519840 | orchestrator | c9e203b770dd registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached
2026-04-11 04:08:58.519850 | orchestrator | dd3b1a0da393 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-04-11 04:08:58.519860 | orchestrator | c6a0232b4bfa registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-04-11 04:08:58.519871 | orchestrator | 205ddd1bc271 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-04-11 04:08:58.519881 | orchestrator | 27980a4d244e registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-04-11 04:08:58.519890 | orchestrator | 37d5ddbc1148 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-04-11 04:08:58.519899 | orchestrator | 9c8dec6f9202 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-11 04:08:58.519907 | orchestrator | 04c9d9ed7e38 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-11 04:08:58.519916 | orchestrator | 57196435164f registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-11 04:08:58.913003 | orchestrator |
2026-04-11 04:08:58.913095 | orchestrator | ## Images @ testbed-node-1
2026-04-11 04:08:58.913109 | orchestrator |
2026-04-11 04:08:58.913117 | orchestrator | + echo
2026-04-11 04:08:58.913125 | orchestrator | + echo '## Images @ testbed-node-1'
2026-04-11 04:08:58.913132 | orchestrator | + echo
2026-04-11 04:08:58.913150 | orchestrator | + osism container testbed-node-1 images
2026-04-11 04:09:01.600709 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-11 04:09:01.600833 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-11 04:09:01.600856 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-11 04:09:01.600874 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-11 04:09:01.600891 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-11 04:09:01.600908 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-11 04:09:01.600971 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-11 04:09:01.600992 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-11 04:09:01.601008 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-11 04:09:01.601023 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-11 04:09:01.601039 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-11 04:09:01.601054 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-11 04:09:01.601069 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-11 04:09:01.601101 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-11 04:09:01.601115 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-11 04:09:01.601128 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-11 04:09:01.601141 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-11 04:09:01.601154 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-11 04:09:01.601167 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-11 04:09:01.601182 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-11 04:09:01.601216 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-11 04:09:01.601230 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-11 04:09:01.601243 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-11 04:09:01.601255 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-11 04:09:01.601266 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-11 04:09:01.601278 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-11 04:09:01.601289 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-11 04:09:01.601308 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-11 04:09:01.601321 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-11 04:09:01.601335 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-11 04:09:01.601349 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-11 04:09:01.601363 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-11 04:09:01.601390 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-11 04:09:01.601774 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-11 04:09:01.601794 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-11 04:09:01.601803 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-11 04:09:01.601811 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-11 04:09:01.601819 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-11 04:09:01.601827 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-11 04:09:01.601835 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-11 04:09:01.601842 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-11 04:09:01.601850 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-11 04:09:01.601858 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-11 04:09:01.601865 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-11 04:09:01.601873 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-11 04:09:01.601881 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-11 04:09:01.601889 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-11 04:09:01.601897 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-11 04:09:01.601904 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-11 04:09:01.601912 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-11 04:09:01.601920 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-11 04:09:01.601927 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-11 04:09:01.601935 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-11 04:09:01.601943 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-11 04:09:01.601950 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-11 04:09:01.601958 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-11 04:09:01.601966 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-11 04:09:01.601974 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-11 04:09:01.601992 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-11 04:09:01.602000 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-11 04:09:01.602008 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-11 04:09:01.602066 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-11 04:09:01.602077 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-11 04:09:01.602085 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-11 04:09:01.602113 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-11 04:09:01.602131 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-11 04:09:01.602140 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-11 04:09:01.602147 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-11 04:09:01.602155 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-11 04:09:01.602163 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-11 04:09:01.992794 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-11 04:09:01.992910 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-11 04:09:02.051811 | orchestrator | 2026-04-11 04:09:02.051903 | orchestrator | ## Containers @ testbed-node-2 2026-04-11 04:09:02.051914 | orchestrator | 2026-04-11 04:09:02.051922 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-11 04:09:02.051929 | orchestrator | + echo 
2026-04-11 04:09:02.051936 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-11 04:09:02.051944 | orchestrator | + echo 2026-04-11 04:09:02.051951 | orchestrator | + osism container testbed-node-2 ps 2026-04-11 04:09:04.780977 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-11 04:09:04.781098 | orchestrator | b41281216d73 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-04-11 04:09:04.781116 | orchestrator | 5fa7f81b3c3a registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-04-11 04:09:04.781127 | orchestrator | 76951792ffe2 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-11 04:09:04.781135 | orchestrator | c198c70cf5eb registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2026-04-11 04:09:04.781147 | orchestrator | f8e308f3c96b registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-04-11 04:09:04.781156 | orchestrator | c64e189cc4e0 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-04-11 04:09:04.781166 | orchestrator | 5b62570099e9 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-04-11 04:09:04.781197 | orchestrator | e08dfbb4264a registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-04-11 04:09:04.781203 | orchestrator | 88229b2dcc2f 
registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-04-11 04:09:04.781208 | orchestrator | f8edede50445 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-04-11 04:09:04.781214 | orchestrator | 5d2976d4e377 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-04-11 04:09:04.781223 | orchestrator | ad13cf6724b2 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-04-11 04:09:04.781229 | orchestrator | 6e8ae3d19000 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-04-11 04:09:04.781234 | orchestrator | f56b770ec4eb registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-04-11 04:09:04.781239 | orchestrator | f9a0f33d4b6c registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-04-11 04:09:04.781244 | orchestrator | d3c1ab1b452a registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_api 2026-04-11 04:09:04.781249 | orchestrator | cdf60ebe55be registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-04-11 04:09:04.781254 | orchestrator | f8d01d323b8e registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-04-11 04:09:04.781259 | orchestrator | 2a892b2953b1 
registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-04-11 04:09:04.781277 | orchestrator | 92a8bf35f4db registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-04-11 04:09:04.781283 | orchestrator | 75cb81e4fe23 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-04-11 04:09:04.781288 | orchestrator | 3ad5a2c90490 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes octavia_driver_agent 2026-04-11 04:09:04.781293 | orchestrator | 1ade2f51326f registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api 2026-04-11 04:09:04.781298 | orchestrator | 887cd552069c registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-04-11 04:09:04.781309 | orchestrator | b17f44b64d23 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-04-11 04:09:04.781314 | orchestrator | 8b7f60b670e0 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-04-11 04:09:04.781319 | orchestrator | bb700c7ec3bc registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_central 2026-04-11 04:09:04.781324 | orchestrator | 711989f41acf registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api 2026-04-11 04:09:04.781330 
| orchestrator | 34f3e0b4d4b1 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9 2026-04-11 04:09:04.781335 | orchestrator | d891459497d2 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker 2026-04-11 04:09:04.781340 | orchestrator | 44513fa2b5c1 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener 2026-04-11 04:09:04.781345 | orchestrator | 2dd11c19a8b9 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api 2026-04-11 04:09:04.781350 | orchestrator | 79ce18a4eb5f registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_backup 2026-04-11 04:09:04.781355 | orchestrator | c3575631754f registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume 2026-04-11 04:09:04.781360 | orchestrator | b2f51f069ec2 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-04-11 04:09:04.781365 | orchestrator | d30ac002792e registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-11 04:09:04.781371 | orchestrator | 3d628e9212f6 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api 2026-04-11 04:09:04.781376 | orchestrator | 8441462c2849 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_console 2026-04-11 
04:09:04.781384 | orchestrator | 3f3ec7ad791e registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver 2026-04-11 04:09:04.781394 | orchestrator | 29746a63d5c9 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon 2026-04-11 04:09:04.781399 | orchestrator | 434652e40c24 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_novncproxy 2026-04-11 04:09:04.781405 | orchestrator | c83f239c93be registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor 2026-04-11 04:09:04.781413 | orchestrator | e40f096cb37d registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api 2026-04-11 04:09:04.781419 | orchestrator | 787e61a9fb11 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_scheduler 2026-04-11 04:09:04.781424 | orchestrator | 3d6ff33b58dd registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) neutron_server 2026-04-11 04:09:04.781429 | orchestrator | 5ee04894f2ae registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) placement_api 2026-04-11 04:09:04.781434 | orchestrator | e674008d649f registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone 2026-04-11 04:09:04.781439 | orchestrator | 55af932a1331 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_fernet 2026-04-11 04:09:04.781444 | orchestrator | 
f3a3f7ffb461 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_ssh 2026-04-11 04:09:04.781449 | orchestrator | ebd59e6a275d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 59 minutes ago Up 59 minutes ceph-mgr-testbed-node-2 2026-04-11 04:09:04.781454 | orchestrator | ef8b265ebbe1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-04-11 04:09:04.781459 | orchestrator | f023dde40a6c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-04-11 04:09:04.781467 | orchestrator | e473d25f38c7 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-04-11 04:09:04.781473 | orchestrator | 321bdd7b12a6 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-04-11 04:09:04.781478 | orchestrator | 5a563ea65456 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-04-11 04:09:04.781502 | orchestrator | f401b729a043 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-04-11 04:09:04.781510 | orchestrator | cc676220ef5d registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-04-11 04:09:04.781515 | orchestrator | 264e40acb745 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-04-11 04:09:04.781521 | orchestrator | d227a697f778 
registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-04-11 04:09:04.781530 | orchestrator | 22347cd7dd4b registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-04-11 04:09:04.781540 | orchestrator | 162fa9886107 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel 2026-04-11 04:09:04.781546 | orchestrator | c9b016eec4bc registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis 2026-04-11 04:09:04.781551 | orchestrator | 0906fa63458b registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached 2026-04-11 04:09:04.781556 | orchestrator | cc2dca7170f6 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards 2026-04-11 04:09:04.781561 | orchestrator | 9d2a15775d35 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-04-11 04:09:04.781566 | orchestrator | 0ec56b81f1c9 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-04-11 04:09:04.781571 | orchestrator | b5249b60d8bf registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-04-11 04:09:04.781577 | orchestrator | ca3254b4eb50 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-04-11 04:09:04.781582 | orchestrator | 9dfe3de68e37 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-11 04:09:04.781587 
| orchestrator | 87cccf4da1e8 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-11 04:09:04.781592 | orchestrator | 903e437d5d3f registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-11 04:09:05.178577 | orchestrator | 2026-04-11 04:09:05.178686 | orchestrator | ## Images @ testbed-node-2 2026-04-11 04:09:05.178704 | orchestrator | 2026-04-11 04:09:05.178716 | orchestrator | + echo 2026-04-11 04:09:05.178727 | orchestrator | + echo '## Images @ testbed-node-2' 2026-04-11 04:09:05.178739 | orchestrator | + echo 2026-04-11 04:09:05.178751 | orchestrator | + osism container testbed-node-2 images 2026-04-11 04:09:07.882898 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-11 04:09:07.883030 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-11 04:09:07.883049 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-11 04:09:07.883060 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-11 04:09:07.883067 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-11 04:09:07.883073 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-11 04:09:07.883080 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-11 04:09:07.883086 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-11 04:09:07.883116 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-11 04:09:07.883123 | orchestrator | 
registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-11 04:09:07.883130 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-11 04:09:07.883145 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-11 04:09:07.883154 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-11 04:09:07.883164 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-11 04:09:07.883173 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-11 04:09:07.883200 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-11 04:09:07.883217 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-11 04:09:07.883227 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-11 04:09:07.883237 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-11 04:09:07.883247 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-11 04:09:07.883258 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-11 04:09:07.883267 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-11 04:09:07.883277 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-11 04:09:07.883286 | 
orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-04-11 04:09:07.883296 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-11 04:09:07.883306 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-11 04:09:07.883315 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-11 04:09:07.883326 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-11 04:09:07.883335 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-11 04:09:07.883345 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-11 04:09:07.883354 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-11 04:09:07.883364 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-11 04:09:07.883392 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-11 04:09:07.883403 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-11 04:09:07.883429 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-11 04:09:07.883440 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-11 04:09:07.883449 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-11 04:09:07.883459 | orchestrator 
| registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-11 04:09:07.883468 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-11 04:09:07.883479 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-11 04:09:07.883510 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-11 04:09:07.883521 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-11 04:09:07.883530 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-11 04:09:07.883540 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-11 04:09:07.883550 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-11 04:09:07.883665 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-11 04:09:07.883682 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-11 04:09:07.883693 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-11 04:09:07.883704 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-11 04:09:07.883715 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-11 04:09:07.883856 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-11 04:09:07.883874 | orchestrator | 
registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-11 04:09:07.883884 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-11 04:09:07.883894 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-11 04:09:07.883905 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-11 04:09:07.883916 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-11 04:09:07.883925 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-11 04:09:07.883936 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-11 04:09:07.883947 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-11 04:09:07.883958 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-11 04:09:07.883982 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-11 04:09:07.883992 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-11 04:09:07.884002 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-11 04:09:07.884013 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-11 04:09:07.884024 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-11 04:09:07.884034 | orchestrator | 
registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-11 04:09:07.884044 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-11 04:09:07.884053 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-11 04:09:07.884063 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-11 04:09:07.884072 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-11 04:09:08.295075 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-11 04:09:08.305902 | orchestrator | + set -e 2026-04-11 04:09:08.305997 | orchestrator | + source /opt/manager-vars.sh 2026-04-11 04:09:08.306010 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-11 04:09:08.306068 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-11 04:09:08.306077 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-11 04:09:08.306778 | orchestrator | ++ CEPH_VERSION=reef 2026-04-11 04:09:08.306833 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-11 04:09:08.306846 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-11 04:09:08.306857 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-11 04:09:08.306866 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-11 04:09:08.306874 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-11 04:09:08.306883 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-11 04:09:08.306892 | orchestrator | ++ export ARA=false 2026-04-11 04:09:08.306901 | orchestrator | ++ ARA=false 2026-04-11 04:09:08.306910 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-11 04:09:08.306918 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-11 04:09:08.306927 | orchestrator | ++ export TEMPEST=false 2026-04-11 04:09:08.306935 | orchestrator | ++ TEMPEST=false 2026-04-11 
04:09:08.306944 | orchestrator | ++ export IS_ZUUL=true 2026-04-11 04:09:08.306952 | orchestrator | ++ IS_ZUUL=true 2026-04-11 04:09:08.306961 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48 2026-04-11 04:09:08.306969 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48 2026-04-11 04:09:08.306978 | orchestrator | ++ export EXTERNAL_API=false 2026-04-11 04:09:08.306986 | orchestrator | ++ EXTERNAL_API=false 2026-04-11 04:09:08.306995 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-11 04:09:08.307003 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-11 04:09:08.307013 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-11 04:09:08.307028 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-11 04:09:08.307042 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-11 04:09:08.307056 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-11 04:09:08.307071 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-11 04:09:08.307086 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-11 04:09:08.320447 | orchestrator | + set -e 2026-04-11 04:09:08.320563 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-11 04:09:08.320581 | orchestrator | ++ export INTERACTIVE=false 2026-04-11 04:09:08.320597 | orchestrator | ++ INTERACTIVE=false 2026-04-11 04:09:08.320622 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-11 04:09:08.320646 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-11 04:09:08.320659 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-11 04:09:08.321630 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-11 04:09:08.327878 | orchestrator | 2026-04-11 04:09:08.327978 | orchestrator | # Ceph status 2026-04-11 04:09:08.328005 | orchestrator | 2026-04-11 04:09:08.328027 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-11 04:09:08.328048 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-11 04:09:08.328069 | orchestrator | + echo 2026-04-11 04:09:08.328089 | orchestrator | + echo '# Ceph status' 2026-04-11 04:09:08.328112 | orchestrator | + echo 2026-04-11 04:09:08.328134 | orchestrator | + ceph -s 2026-04-11 04:09:08.952840 | orchestrator | cluster: 2026-04-11 04:09:08.952948 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-11 04:09:08.952967 | orchestrator | health: HEALTH_OK 2026-04-11 04:09:08.952977 | orchestrator | 2026-04-11 04:09:08.952985 | orchestrator | services: 2026-04-11 04:09:08.952993 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 72m) 2026-04-11 04:09:08.953002 | orchestrator | mgr: testbed-node-0(active, since 59m), standbys: testbed-node-1, testbed-node-2 2026-04-11 04:09:08.953011 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-11 04:09:08.953019 | orchestrator | osd: 6 osds: 6 up (since 68m), 6 in (since 69m) 2026-04-11 04:09:08.953027 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-11 04:09:08.953034 | orchestrator | 2026-04-11 04:09:08.953042 | orchestrator | data: 2026-04-11 04:09:08.953049 | orchestrator | volumes: 1/1 healthy 2026-04-11 04:09:08.953056 | orchestrator | pools: 14 pools, 401 pgs 2026-04-11 04:09:08.953064 | orchestrator | objects: 556 objects, 2.2 GiB 2026-04-11 04:09:08.953071 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail 2026-04-11 04:09:08.953079 | orchestrator | pgs: 401 active+clean 2026-04-11 04:09:08.953086 | orchestrator | 2026-04-11 04:09:09.017706 | orchestrator | 2026-04-11 04:09:09.017792 | orchestrator | # Ceph versions 2026-04-11 04:09:09.017814 | orchestrator | 2026-04-11 04:09:09.017830 | orchestrator | + echo 2026-04-11 04:09:09.017842 | orchestrator | + echo '# Ceph versions' 2026-04-11 04:09:09.017856 | orchestrator | + echo 2026-04-11 04:09:09.017868 | orchestrator | + ceph versions 2026-04-11 04:09:09.663406 | orchestrator | { 2026-04-11 
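The check script prints `ceph -s` for the log but does not gate on it here; a script that wants to fail fast on an unhealthy cluster can extract the `health:` token from the same output. A sketch, run against a captured fragment of the status shown above (on a live cluster you would use `status="$(ceph -s)"` instead):

```shell
# Simulated `ceph -s` output, taken from the log above.
status='cluster:
    id:     11111111-1111-1111-1111-111111111111
    health: HEALTH_OK'

# Pull out the health token and fail unless it is HEALTH_OK.
health=$(printf '%s\n' "$status" | awk '/health:/ { print $2 }')
if [ "$health" = "HEALTH_OK" ]; then
  echo "cluster healthy"
else
  echo "cluster unhealthy: $health" >&2
  exit 1
fi
```

`HEALTH_WARN` and `HEALTH_ERR` would both trip the else branch; a more lenient check could accept `HEALTH_WARN` for known-transient warnings.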
04:09:09.663582 | orchestrator | "mon": { 2026-04-11 04:09:09.663600 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-11 04:09:09.663611 | orchestrator | }, 2026-04-11 04:09:09.663622 | orchestrator | "mgr": { 2026-04-11 04:09:09.663631 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-11 04:09:09.663640 | orchestrator | }, 2026-04-11 04:09:09.663649 | orchestrator | "osd": { 2026-04-11 04:09:09.663658 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-04-11 04:09:09.663667 | orchestrator | }, 2026-04-11 04:09:09.663675 | orchestrator | "mds": { 2026-04-11 04:09:09.663684 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-11 04:09:09.663693 | orchestrator | }, 2026-04-11 04:09:09.663702 | orchestrator | "rgw": { 2026-04-11 04:09:09.663710 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-11 04:09:09.663719 | orchestrator | }, 2026-04-11 04:09:09.663732 | orchestrator | "overall": { 2026-04-11 04:09:09.663776 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-04-11 04:09:09.663795 | orchestrator | } 2026-04-11 04:09:09.663809 | orchestrator | } 2026-04-11 04:09:09.712960 | orchestrator | 2026-04-11 04:09:09.713039 | orchestrator | # Ceph OSD tree 2026-04-11 04:09:09.713048 | orchestrator | 2026-04-11 04:09:09.713056 | orchestrator | + echo 2026-04-11 04:09:09.713063 | orchestrator | + echo '# Ceph OSD tree' 2026-04-11 04:09:09.713070 | orchestrator | + echo 2026-04-11 04:09:09.713077 | orchestrator | + ceph osd df tree 2026-04-11 04:09:10.316166 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-11 04:09:10.316247 | orchestrator | -1 0.11691 - 120 GiB 7.0 GiB 6.7 GiB 6 KiB 
384 MiB 113 GiB 5.88 1.00 - root default 2026-04-11 04:09:10.316256 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.90 1.00 - host testbed-node-3 2026-04-11 04:09:10.316262 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 66 MiB 19 GiB 5.92 1.01 190 up osd.0 2026-04-11 04:09:10.316269 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.89 1.00 202 up osd.4 2026-04-11 04:09:10.316298 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 122 MiB 38 GiB 5.86 1.00 - host testbed-node-4 2026-04-11 04:09:10.316316 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 78 MiB 19 GiB 5.79 0.98 195 up osd.2 2026-04-11 04:09:10.316322 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 44 MiB 19 GiB 5.93 1.01 195 up osd.5 2026-04-11 04:09:10.316328 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-5 2026-04-11 04:09:10.316335 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 6.80 1.16 185 up osd.1 2026-04-11 04:09:10.316342 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1008 MiB 947 MiB 1 KiB 62 MiB 19 GiB 4.93 0.84 203 up osd.3 2026-04-11 04:09:10.316348 | orchestrator | TOTAL 120 GiB 7.0 GiB 6.7 GiB 9.3 KiB 384 MiB 113 GiB 5.88 2026-04-11 04:09:10.316354 | orchestrator | MIN/MAX VAR: 0.84/1.16 STDDEV: 0.54 2026-04-11 04:09:10.372965 | orchestrator | 2026-04-11 04:09:10.373042 | orchestrator | # Ceph monitor status 2026-04-11 04:09:10.373051 | orchestrator | 2026-04-11 04:09:10.373057 | orchestrator | + echo 2026-04-11 04:09:10.373064 | orchestrator | + echo '# Ceph monitor status' 2026-04-11 04:09:10.373070 | orchestrator | + echo 2026-04-11 04:09:10.373076 | orchestrator | + ceph mon stat 2026-04-11 04:09:11.013695 | orchestrator | e1: 3 mons at 
{testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-11 04:09:11.078137 | orchestrator | 2026-04-11 04:09:11.078223 | orchestrator | # Ceph quorum status 2026-04-11 04:09:11.078235 | orchestrator | 2026-04-11 04:09:11.078243 | orchestrator | + echo 2026-04-11 04:09:11.078251 | orchestrator | + echo '# Ceph quorum status' 2026-04-11 04:09:11.078259 | orchestrator | + echo 2026-04-11 04:09:11.078266 | orchestrator | + ceph quorum_status 2026-04-11 04:09:11.078701 | orchestrator | + jq 2026-04-11 04:09:11.759392 | orchestrator | { 2026-04-11 04:09:11.760345 | orchestrator | "election_epoch": 6, 2026-04-11 04:09:11.760398 | orchestrator | "quorum": [ 2026-04-11 04:09:11.760411 | orchestrator | 0, 2026-04-11 04:09:11.760422 | orchestrator | 1, 2026-04-11 04:09:11.760430 | orchestrator | 2 2026-04-11 04:09:11.760438 | orchestrator | ], 2026-04-11 04:09:11.760447 | orchestrator | "quorum_names": [ 2026-04-11 04:09:11.760457 | orchestrator | "testbed-node-0", 2026-04-11 04:09:11.760465 | orchestrator | "testbed-node-1", 2026-04-11 04:09:11.760527 | orchestrator | "testbed-node-2" 2026-04-11 04:09:11.760537 | orchestrator | ], 2026-04-11 04:09:11.760547 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-11 04:09:11.760557 | orchestrator | "quorum_age": 4346, 2026-04-11 04:09:11.760565 | orchestrator | "features": { 2026-04-11 04:09:11.760575 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-11 04:09:11.760588 | orchestrator | "quorum_mon": [ 2026-04-11 04:09:11.760598 | orchestrator | "kraken", 2026-04-11 04:09:11.760607 | orchestrator | "luminous", 2026-04-11 04:09:11.760616 | orchestrator | "mimic", 2026-04-11 04:09:11.760625 | orchestrator | 
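The trace pipes `ceph quorum_status` through `jq` purely for pretty-printing. The same JSON can also be checked programmatically, e.g. to assert that all expected monitors are in quorum, which is what the `osism validate ceph-mons` play later in this log does. A rough sketch without `jq`, run against a trimmed fragment of the JSON above (the sed-based extraction is an illustration only and assumes the single-line JSON shape shown; live you would capture `quorum_json="$(ceph quorum_status)"`):

```shell
# Trimmed fragment of the `ceph quorum_status` output from the log.
quorum_json='{"quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"], "quorum_leader_name": "testbed-node-0"}'

expected_mons=3   # assumption: mon count known from the inventory

# Extract the quorum_names array body and count its quoted entries.
in_quorum=$(printf '%s' "$quorum_json" \
  | sed -n 's/.*"quorum_names": \[\([^]]*\)\].*/\1/p' \
  | tr ',' '\n' | grep -c '"')
if [ "$in_quorum" -ne "$expected_mons" ]; then
  echo "only $in_quorum of $expected_mons mons in quorum" >&2
  exit 1
fi
echo "all $in_quorum mons in quorum"
```

With `jq` available (as in the trace), `ceph quorum_status | jq '.quorum_names | length'` is the more robust equivalent.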
"osdmap-prune", 2026-04-11 04:09:11.760634 | orchestrator | "nautilus", 2026-04-11 04:09:11.760643 | orchestrator | "octopus", 2026-04-11 04:09:11.760652 | orchestrator | "pacific", 2026-04-11 04:09:11.760660 | orchestrator | "elector-pinging", 2026-04-11 04:09:11.760668 | orchestrator | "quincy", 2026-04-11 04:09:11.760677 | orchestrator | "reef" 2026-04-11 04:09:11.760686 | orchestrator | ] 2026-04-11 04:09:11.760695 | orchestrator | }, 2026-04-11 04:09:11.760704 | orchestrator | "monmap": { 2026-04-11 04:09:11.760713 | orchestrator | "epoch": 1, 2026-04-11 04:09:11.760723 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-11 04:09:11.760734 | orchestrator | "modified": "2026-04-11T02:56:22.811149Z", 2026-04-11 04:09:11.760743 | orchestrator | "created": "2026-04-11T02:56:22.811149Z", 2026-04-11 04:09:11.760752 | orchestrator | "min_mon_release": 18, 2026-04-11 04:09:11.760761 | orchestrator | "min_mon_release_name": "reef", 2026-04-11 04:09:11.760767 | orchestrator | "election_strategy": 1, 2026-04-11 04:09:11.760773 | orchestrator | "disallowed_leaders: ": "", 2026-04-11 04:09:11.760803 | orchestrator | "stretch_mode": false, 2026-04-11 04:09:11.760809 | orchestrator | "tiebreaker_mon": "", 2026-04-11 04:09:11.760814 | orchestrator | "removed_ranks: ": "", 2026-04-11 04:09:11.760820 | orchestrator | "features": { 2026-04-11 04:09:11.760825 | orchestrator | "persistent": [ 2026-04-11 04:09:11.760831 | orchestrator | "kraken", 2026-04-11 04:09:11.760836 | orchestrator | "luminous", 2026-04-11 04:09:11.760841 | orchestrator | "mimic", 2026-04-11 04:09:11.760847 | orchestrator | "osdmap-prune", 2026-04-11 04:09:11.760852 | orchestrator | "nautilus", 2026-04-11 04:09:11.760857 | orchestrator | "octopus", 2026-04-11 04:09:11.760863 | orchestrator | "pacific", 2026-04-11 04:09:11.760868 | orchestrator | "elector-pinging", 2026-04-11 04:09:11.760873 | orchestrator | "quincy", 2026-04-11 04:09:11.760879 | orchestrator | "reef" 2026-04-11 
04:09:11.760884 | orchestrator | ], 2026-04-11 04:09:11.760889 | orchestrator | "optional": [] 2026-04-11 04:09:11.760895 | orchestrator | }, 2026-04-11 04:09:11.760900 | orchestrator | "mons": [ 2026-04-11 04:09:11.760907 | orchestrator | { 2026-04-11 04:09:11.760916 | orchestrator | "rank": 0, 2026-04-11 04:09:11.760927 | orchestrator | "name": "testbed-node-0", 2026-04-11 04:09:11.760940 | orchestrator | "public_addrs": { 2026-04-11 04:09:11.760948 | orchestrator | "addrvec": [ 2026-04-11 04:09:11.760957 | orchestrator | { 2026-04-11 04:09:11.760965 | orchestrator | "type": "v2", 2026-04-11 04:09:11.760975 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-11 04:09:11.760985 | orchestrator | "nonce": 0 2026-04-11 04:09:11.760993 | orchestrator | }, 2026-04-11 04:09:11.761002 | orchestrator | { 2026-04-11 04:09:11.761010 | orchestrator | "type": "v1", 2026-04-11 04:09:11.761015 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-11 04:09:11.761020 | orchestrator | "nonce": 0 2026-04-11 04:09:11.761026 | orchestrator | } 2026-04-11 04:09:11.761031 | orchestrator | ] 2026-04-11 04:09:11.761036 | orchestrator | }, 2026-04-11 04:09:11.761042 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-11 04:09:11.761047 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-11 04:09:11.761052 | orchestrator | "priority": 0, 2026-04-11 04:09:11.761058 | orchestrator | "weight": 0, 2026-04-11 04:09:11.761063 | orchestrator | "crush_location": "{}" 2026-04-11 04:09:11.761068 | orchestrator | }, 2026-04-11 04:09:11.761074 | orchestrator | { 2026-04-11 04:09:11.761079 | orchestrator | "rank": 1, 2026-04-11 04:09:11.761084 | orchestrator | "name": "testbed-node-1", 2026-04-11 04:09:11.761089 | orchestrator | "public_addrs": { 2026-04-11 04:09:11.761095 | orchestrator | "addrvec": [ 2026-04-11 04:09:11.761100 | orchestrator | { 2026-04-11 04:09:11.761105 | orchestrator | "type": "v2", 2026-04-11 04:09:11.761111 | orchestrator | "addr": "192.168.16.11:3300", 
2026-04-11 04:09:11.761116 | orchestrator | "nonce": 0 2026-04-11 04:09:11.761121 | orchestrator | }, 2026-04-11 04:09:11.761126 | orchestrator | { 2026-04-11 04:09:11.761132 | orchestrator | "type": "v1", 2026-04-11 04:09:11.761137 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-11 04:09:11.761142 | orchestrator | "nonce": 0 2026-04-11 04:09:11.761148 | orchestrator | } 2026-04-11 04:09:11.761153 | orchestrator | ] 2026-04-11 04:09:11.761158 | orchestrator | }, 2026-04-11 04:09:11.761164 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-11 04:09:11.761169 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-11 04:09:11.761174 | orchestrator | "priority": 0, 2026-04-11 04:09:11.761180 | orchestrator | "weight": 0, 2026-04-11 04:09:11.761185 | orchestrator | "crush_location": "{}" 2026-04-11 04:09:11.761190 | orchestrator | }, 2026-04-11 04:09:11.761196 | orchestrator | { 2026-04-11 04:09:11.761201 | orchestrator | "rank": 2, 2026-04-11 04:09:11.761206 | orchestrator | "name": "testbed-node-2", 2026-04-11 04:09:11.761212 | orchestrator | "public_addrs": { 2026-04-11 04:09:11.761217 | orchestrator | "addrvec": [ 2026-04-11 04:09:11.761223 | orchestrator | { 2026-04-11 04:09:11.761228 | orchestrator | "type": "v2", 2026-04-11 04:09:11.761234 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-11 04:09:11.761239 | orchestrator | "nonce": 0 2026-04-11 04:09:11.761244 | orchestrator | }, 2026-04-11 04:09:11.761249 | orchestrator | { 2026-04-11 04:09:11.761255 | orchestrator | "type": "v1", 2026-04-11 04:09:11.761260 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-11 04:09:11.761265 | orchestrator | "nonce": 0 2026-04-11 04:09:11.761278 | orchestrator | } 2026-04-11 04:09:11.761283 | orchestrator | ] 2026-04-11 04:09:11.761332 | orchestrator | }, 2026-04-11 04:09:11.761338 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-11 04:09:11.761343 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-11 04:09:11.761349 | 
orchestrator | "priority": 0, 2026-04-11 04:09:11.761366 | orchestrator | "weight": 0, 2026-04-11 04:09:11.761372 | orchestrator | "crush_location": "{}" 2026-04-11 04:09:11.761377 | orchestrator | } 2026-04-11 04:09:11.761382 | orchestrator | ] 2026-04-11 04:09:11.761388 | orchestrator | } 2026-04-11 04:09:11.761393 | orchestrator | } 2026-04-11 04:09:11.761399 | orchestrator | 2026-04-11 04:09:11.761404 | orchestrator | # Ceph free space status 2026-04-11 04:09:11.761410 | orchestrator | 2026-04-11 04:09:11.761415 | orchestrator | + echo 2026-04-11 04:09:11.761420 | orchestrator | + echo '# Ceph free space status' 2026-04-11 04:09:11.761426 | orchestrator | + echo 2026-04-11 04:09:11.761431 | orchestrator | + ceph df 2026-04-11 04:09:12.351387 | orchestrator | --- RAW STORAGE --- 2026-04-11 04:09:12.351642 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-11 04:09:12.351691 | orchestrator | hdd 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.88 2026-04-11 04:09:12.351729 | orchestrator | TOTAL 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.88 2026-04-11 04:09:12.351765 | orchestrator | 2026-04-11 04:09:12.351787 | orchestrator | --- POOLS --- 2026-04-11 04:09:12.351808 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-11 04:09:12.351831 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-04-11 04:09:12.351851 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-11 04:09:12.351872 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-11 04:09:12.351892 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-11 04:09:12.351906 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-11 04:09:12.351920 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-11 04:09:12.351933 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-11 04:09:12.351946 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-11 04:09:12.351959 | orchestrator | 
.rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-04-11 04:09:12.351972 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-11 04:09:12.351985 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-11 04:09:12.351997 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.92 35 GiB 2026-04-11 04:09:12.352010 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-11 04:09:12.352022 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-11 04:09:12.398623 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-11 04:09:12.456925 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-11 04:09:12.456996 | orchestrator | + osism apply facts 2026-04-11 04:09:14.749324 | orchestrator | 2026-04-11 04:09:14 | INFO  | Task 0a2e30d7-dda3-46da-954b-b9e274a08846 (facts) was prepared for execution. 2026-04-11 04:09:14.749589 | orchestrator | 2026-04-11 04:09:14 | INFO  | It takes a moment until task 0a2e30d7-dda3-46da-954b-b9e274a08846 (facts) has been started and output is visible here. 2026-04-11 04:09:30.607917 | orchestrator | 2026-04-11 04:09:30.608098 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-11 04:09:30.608117 | orchestrator | 2026-04-11 04:09:30.608126 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-11 04:09:30.608134 | orchestrator | Saturday 11 April 2026 04:09:19 +0000 (0:00:00.329) 0:00:00.329 ******** 2026-04-11 04:09:30.608142 | orchestrator | ok: [testbed-manager] 2026-04-11 04:09:30.608151 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:09:30.608159 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:09:30.608166 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:09:30.608172 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:09:30.608199 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:09:30.608222 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:09:30.608230 | orchestrator | 2026-04-11 04:09:30.608237 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-04-11 04:09:30.608243 | orchestrator | Saturday 11 April 2026 04:09:21 +0000 (0:00:01.371) 0:00:01.701 ******** 2026-04-11 04:09:30.608250 | orchestrator | skipping: [testbed-manager] 2026-04-11 04:09:30.608257 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:09:30.608270 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:09:30.608277 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:09:30.608283 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:09:30.608290 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:09:30.608296 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:09:30.608303 | orchestrator | 2026-04-11 04:09:30.608309 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-11 04:09:30.608315 | orchestrator | 2026-04-11 04:09:30.608321 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-11 04:09:30.608327 | orchestrator | Saturday 11 April 2026 04:09:22 +0000 (0:00:01.567) 0:00:03.268 ******** 2026-04-11 04:09:30.608334 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:09:30.608341 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:09:30.608348 | orchestrator | ok: [testbed-manager] 2026-04-11 04:09:30.608354 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:09:30.608362 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:09:30.608369 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:09:30.608375 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:09:30.608380 | orchestrator | 2026-04-11 04:09:30.608386 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-11 04:09:30.608392 | orchestrator | 2026-04-11 04:09:30.608397 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-11 04:09:30.608403 | orchestrator | Saturday 11 
April 2026 04:09:29 +0000 (0:00:06.742) 0:00:10.011 ******** 2026-04-11 04:09:30.608409 | orchestrator | skipping: [testbed-manager] 2026-04-11 04:09:30.608415 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:09:30.608423 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:09:30.608430 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:09:30.608437 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:09:30.608443 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:09:30.608449 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:09:30.608455 | orchestrator | 2026-04-11 04:09:30.608461 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 04:09:30.608468 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 04:09:30.608519 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 04:09:30.608526 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 04:09:30.608532 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 04:09:30.608537 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 04:09:30.608541 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 04:09:30.608546 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 04:09:30.608551 | orchestrator | 2026-04-11 04:09:30.608555 | orchestrator | 2026-04-11 04:09:30.608560 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 04:09:30.608564 | orchestrator | Saturday 11 April 2026 04:09:30 +0000 (0:00:00.597) 0:00:10.609 ******** 2026-04-11 
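The PLAY RECAP lines above carry the per-host counters that decide whether `osism apply facts` succeeded. A small sketch of pulling the `failed=` and `unreachable=` counters out of one recap line (whitespace collapsed for brevity; the line format is as printed by Ansible above):

```shell
# One PLAY RECAP line as printed in the log, whitespace collapsed.
recap='testbed-node-0 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0'

# Extract the counters that indicate a problem for this host.
failed=$(printf '%s\n' "$recap" | grep -o 'failed=[0-9]*' | cut -d= -f2)
unreachable=$(printf '%s\n' "$recap" | grep -o 'unreachable=[0-9]*' | cut -d= -f2)

if [ "$failed" -eq 0 ] && [ "$unreachable" -eq 0 ]; then
  echo "host passed"
else
  echo "host failed: failed=$failed unreachable=$unreachable" >&2
fi
```

In practice `ansible-playbook` already exits non-zero on failures, so parsing the recap is only needed when the exit code is swallowed by a wrapper, as can happen with task runners like the one shown here.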
04:09:30.608580 | orchestrator | =============================================================================== 2026-04-11 04:09:30.608585 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.74s 2026-04-11 04:09:30.608589 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.57s 2026-04-11 04:09:30.608594 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.37s 2026-04-11 04:09:30.608598 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2026-04-11 04:09:30.993982 | orchestrator | + osism validate ceph-mons 2026-04-11 04:10:05.779707 | orchestrator | 2026-04-11 04:10:05.779804 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-11 04:10:05.779817 | orchestrator | 2026-04-11 04:10:05.779824 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-11 04:10:05.779845 | orchestrator | Saturday 11 April 2026 04:09:48 +0000 (0:00:00.511) 0:00:00.511 ******** 2026-04-11 04:10:05.779854 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 04:10:05.779860 | orchestrator | 2026-04-11 04:10:05.779866 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-11 04:10:05.779873 | orchestrator | Saturday 11 April 2026 04:09:49 +0000 (0:00:00.949) 0:00:01.461 ******** 2026-04-11 04:10:05.779879 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 04:10:05.779885 | orchestrator | 2026-04-11 04:10:05.779891 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-11 04:10:05.779898 | orchestrator | Saturday 11 April 2026 04:09:50 +0000 (0:00:01.220) 0:00:02.681 ******** 2026-04-11 04:10:05.779903 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:10:05.779911 
| orchestrator | 2026-04-11 04:10:05.779918 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-11 04:10:05.779924 | orchestrator | Saturday 11 April 2026 04:09:50 +0000 (0:00:00.132) 0:00:02.814 ******** 2026-04-11 04:10:05.779931 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:10:05.779937 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:10:05.779944 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:10:05.779951 | orchestrator | 2026-04-11 04:10:05.779957 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-11 04:10:05.779963 | orchestrator | Saturday 11 April 2026 04:09:51 +0000 (0:00:00.346) 0:00:03.160 ******** 2026-04-11 04:10:05.779970 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:10:05.779975 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:10:05.779982 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:10:05.779988 | orchestrator | 2026-04-11 04:10:05.779995 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-11 04:10:05.780001 | orchestrator | Saturday 11 April 2026 04:09:52 +0000 (0:00:01.030) 0:00:04.190 ******** 2026-04-11 04:10:05.780008 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:10:05.780014 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:10:05.780020 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:10:05.780027 | orchestrator | 2026-04-11 04:10:05.780033 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-11 04:10:05.780039 | orchestrator | Saturday 11 April 2026 04:09:52 +0000 (0:00:00.321) 0:00:04.511 ******** 2026-04-11 04:10:05.780045 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:10:05.780052 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:10:05.780058 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:10:05.780064 | orchestrator | 2026-04-11 04:10:05.780070 | 
orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-11 04:10:05.780075 | orchestrator | Saturday 11 April 2026 04:09:53 +0000 (0:00:00.541) 0:00:05.053 ******** 2026-04-11 04:10:05.780082 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:10:05.780088 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:10:05.780094 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:10:05.780100 | orchestrator | 2026-04-11 04:10:05.780106 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-11 04:10:05.780134 | orchestrator | Saturday 11 April 2026 04:09:53 +0000 (0:00:00.345) 0:00:05.399 ******** 2026-04-11 04:10:05.780141 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:10:05.780147 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:10:05.780152 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:10:05.780158 | orchestrator | 2026-04-11 04:10:05.780163 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-11 04:10:05.780169 | orchestrator | Saturday 11 April 2026 04:09:53 +0000 (0:00:00.342) 0:00:05.742 ******** 2026-04-11 04:10:05.780175 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:10:05.780181 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:10:05.780187 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:10:05.780192 | orchestrator | 2026-04-11 04:10:05.780204 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-11 04:10:05.780210 | orchestrator | Saturday 11 April 2026 04:09:54 +0000 (0:00:00.566) 0:00:06.308 ******** 2026-04-11 04:10:05.780216 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:10:05.780222 | orchestrator | 2026-04-11 04:10:05.780228 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-11 04:10:05.780234 | orchestrator | Saturday 11 April 2026 04:09:54 +0000 
(0:00:00.267) 0:00:06.576 ******** 2026-04-11 04:10:05.780239 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:10:05.780245 | orchestrator | 2026-04-11 04:10:05.780250 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-11 04:10:05.780256 | orchestrator | Saturday 11 April 2026 04:09:55 +0000 (0:00:00.299) 0:00:06.875 ******** 2026-04-11 04:10:05.780262 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:10:05.780268 | orchestrator | 2026-04-11 04:10:05.780275 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 04:10:05.780282 | orchestrator | Saturday 11 April 2026 04:09:55 +0000 (0:00:00.302) 0:00:07.177 ******** 2026-04-11 04:10:05.780288 | orchestrator | 2026-04-11 04:10:05.780294 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 04:10:05.780300 | orchestrator | Saturday 11 April 2026 04:09:55 +0000 (0:00:00.094) 0:00:07.272 ******** 2026-04-11 04:10:05.780307 | orchestrator | 2026-04-11 04:10:05.780313 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 04:10:05.780319 | orchestrator | Saturday 11 April 2026 04:09:55 +0000 (0:00:00.087) 0:00:07.360 ******** 2026-04-11 04:10:05.780325 | orchestrator | 2026-04-11 04:10:05.780330 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-11 04:10:05.780336 | orchestrator | Saturday 11 April 2026 04:09:55 +0000 (0:00:00.081) 0:00:07.441 ******** 2026-04-11 04:10:05.780342 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:10:05.780347 | orchestrator | 2026-04-11 04:10:05.780353 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-11 04:10:05.780360 | orchestrator | Saturday 11 April 2026 04:09:55 +0000 (0:00:00.268) 0:00:07.710 ******** 2026-04-11 04:10:05.780367 | 
orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:05.780372 | orchestrator |
2026-04-11 04:10:05.780398 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-04-11 04:10:05.780405 | orchestrator | Saturday 11 April 2026 04:09:56 +0000 (0:00:00.272) 0:00:07.982 ********
2026-04-11 04:10:05.780412 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:05.780419 | orchestrator |
2026-04-11 04:10:05.780426 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-04-11 04:10:05.780432 | orchestrator | Saturday 11 April 2026 04:09:56 +0000 (0:00:00.119) 0:00:08.102 ********
2026-04-11 04:10:05.780438 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:10:05.780444 | orchestrator |
2026-04-11 04:10:05.780453 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-04-11 04:10:05.780459 | orchestrator | Saturday 11 April 2026 04:09:57 +0000 (0:00:01.669) 0:00:09.771 ********
2026-04-11 04:10:05.780491 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:05.780497 | orchestrator |
2026-04-11 04:10:05.780512 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-04-11 04:10:05.780519 | orchestrator | Saturday 11 April 2026 04:09:58 +0000 (0:00:00.590) 0:00:10.362 ********
2026-04-11 04:10:05.780525 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:05.780531 | orchestrator |
2026-04-11 04:10:05.780538 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-04-11 04:10:05.780544 | orchestrator | Saturday 11 April 2026 04:09:58 +0000 (0:00:00.140) 0:00:10.502 ********
2026-04-11 04:10:05.780550 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:05.780557 | orchestrator |
2026-04-11 04:10:05.780562 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-04-11 04:10:05.780569 | orchestrator | Saturday 11 April 2026 04:09:59 +0000 (0:00:00.366) 0:00:10.869 ********
2026-04-11 04:10:05.780576 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:05.780582 | orchestrator |
2026-04-11 04:10:05.780588 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-04-11 04:10:05.780595 | orchestrator | Saturday 11 April 2026 04:09:59 +0000 (0:00:00.367) 0:00:11.237 ********
2026-04-11 04:10:05.780602 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:05.780608 | orchestrator |
2026-04-11 04:10:05.780615 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-04-11 04:10:05.780621 | orchestrator | Saturday 11 April 2026 04:09:59 +0000 (0:00:00.121) 0:00:11.358 ********
2026-04-11 04:10:05.780628 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:05.780634 | orchestrator |
2026-04-11 04:10:05.780640 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-04-11 04:10:05.780646 | orchestrator | Saturday 11 April 2026 04:09:59 +0000 (0:00:00.135) 0:00:11.493 ********
2026-04-11 04:10:05.780652 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:05.780659 | orchestrator |
2026-04-11 04:10:05.780665 | orchestrator | TASK [Gather status data] ******************************************************
2026-04-11 04:10:05.780672 | orchestrator | Saturday 11 April 2026 04:09:59 +0000 (0:00:00.133) 0:00:11.627 ********
2026-04-11 04:10:05.780679 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:10:05.780685 | orchestrator |
2026-04-11 04:10:05.780691 | orchestrator | TASK [Set health test data] ****************************************************
2026-04-11 04:10:05.780697 | orchestrator | Saturday 11 April 2026 04:10:01 +0000 (0:00:01.335) 0:00:12.962 ********
2026-04-11 04:10:05.780704 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:05.780710 | orchestrator |
2026-04-11 04:10:05.780716 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-04-11 04:10:05.780723 | orchestrator | Saturday 11 April 2026 04:10:01 +0000 (0:00:00.335) 0:00:13.297 ********
2026-04-11 04:10:05.780729 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:05.780735 | orchestrator |
2026-04-11 04:10:05.780741 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-04-11 04:10:05.780747 | orchestrator | Saturday 11 April 2026 04:10:01 +0000 (0:00:00.143) 0:00:13.441 ********
2026-04-11 04:10:05.780751 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:05.780754 | orchestrator |
2026-04-11 04:10:05.780758 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-04-11 04:10:05.780768 | orchestrator | Saturday 11 April 2026 04:10:01 +0000 (0:00:00.159) 0:00:13.601 ********
2026-04-11 04:10:05.780771 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:05.780775 | orchestrator |
2026-04-11 04:10:05.780779 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-04-11 04:10:05.780783 | orchestrator | Saturday 11 April 2026 04:10:01 +0000 (0:00:00.139) 0:00:13.740 ********
2026-04-11 04:10:05.780788 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:05.780794 | orchestrator |
2026-04-11 04:10:05.780801 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-11 04:10:05.780807 | orchestrator | Saturday 11 April 2026 04:10:02 +0000 (0:00:00.389) 0:00:14.130 ********
2026-04-11 04:10:05.780813 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-11 04:10:05.780827 | orchestrator |
2026-04-11 04:10:05.780833 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-11 04:10:05.780839 | orchestrator | Saturday 11 April 2026 04:10:02 +0000 (0:00:00.280) 0:00:14.410 ********
2026-04-11 04:10:05.780845 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:05.780852 | orchestrator |
2026-04-11 04:10:05.780858 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-11 04:10:05.780864 | orchestrator | Saturday 11 April 2026 04:10:02 +0000 (0:00:00.283) 0:00:14.694 ********
2026-04-11 04:10:05.780871 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-11 04:10:05.780877 | orchestrator |
2026-04-11 04:10:05.780883 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-11 04:10:05.780889 | orchestrator | Saturday 11 April 2026 04:10:04 +0000 (0:00:02.058) 0:00:16.752 ********
2026-04-11 04:10:05.780895 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-11 04:10:05.780901 | orchestrator |
2026-04-11 04:10:05.780908 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-11 04:10:05.780914 | orchestrator | Saturday 11 April 2026 04:10:05 +0000 (0:00:00.318) 0:00:17.071 ********
2026-04-11 04:10:05.780920 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-11 04:10:05.780926 | orchestrator |
2026-04-11 04:10:05.780940 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 04:10:08.403805 | orchestrator | Saturday 11 April 2026 04:10:05 +0000 (0:00:00.072) 0:00:17.379 ********
2026-04-11 04:10:08.403860 | orchestrator |
2026-04-11 04:10:08.403867 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 04:10:08.403872 | orchestrator | Saturday 11 April 2026 04:10:05 +0000 (0:00:00.072) 0:00:17.451 ********
2026-04-11 04:10:08.403876 | orchestrator |
2026-04-11 04:10:08.403881 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 04:10:08.403886 | orchestrator | Saturday 11 April 2026 04:10:05 +0000 (0:00:00.072) 0:00:17.523 ********
2026-04-11 04:10:08.403891 | orchestrator |
2026-04-11 04:10:08.403895 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-11 04:10:08.403899 | orchestrator | Saturday 11 April 2026 04:10:05 +0000 (0:00:00.075) 0:00:17.599 ********
2026-04-11 04:10:08.403904 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-11 04:10:08.403908 | orchestrator |
2026-04-11 04:10:08.403912 | orchestrator | TASK [Print report file information] *******************************************
2026-04-11 04:10:08.403917 | orchestrator | Saturday 11 April 2026 04:10:07 +0000 (0:00:01.634) 0:00:19.233 ********
2026-04-11 04:10:08.403921 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-11 04:10:08.403926 | orchestrator |  "msg": [
2026-04-11 04:10:08.403931 | orchestrator |  "Validator run completed.",
2026-04-11 04:10:08.403935 | orchestrator |  "You can find the report file here:",
2026-04-11 04:10:08.403940 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-11T04:09:49+00:00-report.json",
2026-04-11 04:10:08.403944 | orchestrator |  "on the following host:",
2026-04-11 04:10:08.403949 | orchestrator |  "testbed-manager"
2026-04-11 04:10:08.403953 | orchestrator |  ]
2026-04-11 04:10:08.403958 | orchestrator | }
2026-04-11 04:10:08.403962 | orchestrator |
2026-04-11 04:10:08.403966 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 04:10:08.403971 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-11 04:10:08.403976 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 04:10:08.403981 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 04:10:08.403986 | orchestrator |
2026-04-11 04:10:08.404003 | orchestrator |
2026-04-11 04:10:08.404008 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 04:10:08.404013 | orchestrator | Saturday 11 April 2026 04:10:08 +0000 (0:00:00.739) 0:00:19.972 ********
2026-04-11 04:10:08.404021 | orchestrator | ===============================================================================
2026-04-11 04:10:08.404029 | orchestrator | Aggregate test results step one ----------------------------------------- 2.06s
2026-04-11 04:10:08.404037 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.67s
2026-04-11 04:10:08.404045 | orchestrator | Write report file ------------------------------------------------------- 1.63s
2026-04-11 04:10:08.404051 | orchestrator | Gather status data ------------------------------------------------------ 1.34s
2026-04-11 04:10:08.404058 | orchestrator | Create report output directory ------------------------------------------ 1.22s
2026-04-11 04:10:08.404065 | orchestrator | Get container info ------------------------------------------------------ 1.03s
2026-04-11 04:10:08.404072 | orchestrator | Get timestamp for report file ------------------------------------------- 0.95s
2026-04-11 04:10:08.404080 | orchestrator | Print report file information ------------------------------------------- 0.74s
2026-04-11 04:10:08.404087 | orchestrator | Set quorum test data ---------------------------------------------------- 0.59s
2026-04-11 04:10:08.404096 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.57s
2026-04-11 04:10:08.404104 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s
2026-04-11 04:10:08.404111 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.39s
2026-04-11 04:10:08.404119 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.37s
2026-04-11 04:10:08.404127 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.37s
2026-04-11 04:10:08.404134 | orchestrator | Prepare test data for container existance test -------------------------- 0.35s
2026-04-11 04:10:08.404142 | orchestrator | Prepare test data ------------------------------------------------------- 0.35s
2026-04-11 04:10:08.404149 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.34s
2026-04-11 04:10:08.404157 | orchestrator | Set health test data ---------------------------------------------------- 0.34s
2026-04-11 04:10:08.404165 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s
2026-04-11 04:10:08.404172 | orchestrator | Aggregate test results step two ----------------------------------------- 0.32s
2026-04-11 04:10:08.673138 | orchestrator | + osism validate ceph-mgrs
2026-04-11 04:10:41.679876 | orchestrator |
2026-04-11 04:10:41.680018 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-04-11 04:10:41.680037 | orchestrator |
2026-04-11 04:10:41.680049 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-11 04:10:41.680061 | orchestrator | Saturday 11 April 2026 04:10:26 +0000 (0:00:00.517) 0:00:00.517 ********
2026-04-11 04:10:41.680073 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-11 04:10:41.680084 | orchestrator |
2026-04-11 04:10:41.680095 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-11 04:10:41.680106 | orchestrator | Saturday 11 April 2026 04:10:26 +0000 (0:00:00.908) 0:00:01.425 ********
2026-04-11 04:10:41.680138 | orchestrator | ok: 
[testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-11 04:10:41.680150 | orchestrator |
2026-04-11 04:10:41.680169 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-11 04:10:41.680195 | orchestrator | Saturday 11 April 2026 04:10:28 +0000 (0:00:01.091) 0:00:02.517 ********
2026-04-11 04:10:41.680218 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:41.680238 | orchestrator |
2026-04-11 04:10:41.680255 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-11 04:10:41.680273 | orchestrator | Saturday 11 April 2026 04:10:28 +0000 (0:00:00.157) 0:00:02.675 ********
2026-04-11 04:10:41.680289 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:41.680306 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:10:41.680357 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:10:41.680378 | orchestrator |
2026-04-11 04:10:41.680397 | orchestrator | TASK [Get container info] ******************************************************
2026-04-11 04:10:41.680415 | orchestrator | Saturday 11 April 2026 04:10:28 +0000 (0:00:00.356) 0:00:03.031 ********
2026-04-11 04:10:41.680435 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:10:41.680455 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:41.680502 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:10:41.680516 | orchestrator |
2026-04-11 04:10:41.680528 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-11 04:10:41.680541 | orchestrator | Saturday 11 April 2026 04:10:29 +0000 (0:00:01.050) 0:00:04.082 ********
2026-04-11 04:10:41.680553 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:41.680566 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:10:41.680578 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:10:41.680590 | orchestrator |
2026-04-11 04:10:41.680602 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-11 04:10:41.680614 | orchestrator | Saturday 11 April 2026 04:10:29 +0000 (0:00:00.353) 0:00:04.436 ********
2026-04-11 04:10:41.680626 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:41.680639 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:10:41.680651 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:10:41.680663 | orchestrator |
2026-04-11 04:10:41.680675 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-11 04:10:41.680687 | orchestrator | Saturday 11 April 2026 04:10:30 +0000 (0:00:00.561) 0:00:04.997 ********
2026-04-11 04:10:41.680699 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:41.680710 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:10:41.680722 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:10:41.680733 | orchestrator |
2026-04-11 04:10:41.680743 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-04-11 04:10:41.680754 | orchestrator | Saturday 11 April 2026 04:10:30 +0000 (0:00:00.333) 0:00:05.331 ********
2026-04-11 04:10:41.680764 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:41.680775 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:10:41.680786 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:10:41.680797 | orchestrator |
2026-04-11 04:10:41.680807 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-04-11 04:10:41.680818 | orchestrator | Saturday 11 April 2026 04:10:31 +0000 (0:00:00.338) 0:00:05.670 ********
2026-04-11 04:10:41.680828 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:41.680839 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:10:41.680850 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:10:41.680860 | orchestrator |
2026-04-11 04:10:41.680871 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-11 04:10:41.680881 | orchestrator | Saturday 11 April 2026 04:10:31 +0000 (0:00:00.559) 0:00:06.229 ********
2026-04-11 04:10:41.680892 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:41.680902 | orchestrator |
2026-04-11 04:10:41.680913 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-11 04:10:41.680924 | orchestrator | Saturday 11 April 2026 04:10:32 +0000 (0:00:00.289) 0:00:06.519 ********
2026-04-11 04:10:41.680934 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:41.680945 | orchestrator |
2026-04-11 04:10:41.680956 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-11 04:10:41.680975 | orchestrator | Saturday 11 April 2026 04:10:32 +0000 (0:00:00.277) 0:00:06.797 ********
2026-04-11 04:10:41.680986 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:41.680997 | orchestrator |
2026-04-11 04:10:41.681007 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 04:10:41.681018 | orchestrator | Saturday 11 April 2026 04:10:32 +0000 (0:00:00.097) 0:00:07.062 ********
2026-04-11 04:10:41.681028 | orchestrator |
2026-04-11 04:10:41.681039 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 04:10:41.681049 | orchestrator | Saturday 11 April 2026 04:10:32 +0000 (0:00:00.073) 0:00:07.159 ********
2026-04-11 04:10:41.681074 | orchestrator |
2026-04-11 04:10:41.681091 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 04:10:41.681110 | orchestrator | Saturday 11 April 2026 04:10:32 +0000 (0:00:00.073) 0:00:07.233 ********
2026-04-11 04:10:41.681137 | orchestrator |
2026-04-11 04:10:41.681157 | orchestrator | TASK [Print report file information] *******************************************
2026-04-11 04:10:41.681174 | orchestrator | Saturday 11 April 2026 04:10:32 +0000 (0:00:00.083) 0:00:07.316 ********
2026-04-11 04:10:41.681191 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:41.681208 | orchestrator |
2026-04-11 04:10:41.681225 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-11 04:10:41.681325 | orchestrator | Saturday 11 April 2026 04:10:33 +0000 (0:00:00.315) 0:00:07.632 ********
2026-04-11 04:10:41.681348 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:41.681366 | orchestrator |
2026-04-11 04:10:41.681413 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-04-11 04:10:41.681433 | orchestrator | Saturday 11 April 2026 04:10:33 +0000 (0:00:00.253) 0:00:07.886 ********
2026-04-11 04:10:41.681451 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:41.681693 | orchestrator |
2026-04-11 04:10:41.681714 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-04-11 04:10:41.681726 | orchestrator | Saturday 11 April 2026 04:10:33 +0000 (0:00:00.130) 0:00:08.016 ********
2026-04-11 04:10:41.681737 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:10:41.681747 | orchestrator |
2026-04-11 04:10:41.681758 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-04-11 04:10:41.681768 | orchestrator | Saturday 11 April 2026 04:10:35 +0000 (0:00:02.010) 0:00:10.027 ********
2026-04-11 04:10:41.681779 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:41.681789 | orchestrator |
2026-04-11 04:10:41.681800 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-04-11 04:10:41.681810 | orchestrator | Saturday 11 April 2026 04:10:35 +0000 (0:00:00.475) 0:00:10.502 ********
2026-04-11 04:10:41.681821 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:41.681831 | orchestrator |
2026-04-11 04:10:41.681841 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-04-11 04:10:41.681852 | orchestrator | Saturday 11 April 2026 04:10:36 +0000 (0:00:00.327) 0:00:10.830 ********
2026-04-11 04:10:41.681862 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:41.681873 | orchestrator |
2026-04-11 04:10:41.681883 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-04-11 04:10:41.681894 | orchestrator | Saturday 11 April 2026 04:10:36 +0000 (0:00:00.163) 0:00:10.993 ********
2026-04-11 04:10:41.681904 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:10:41.681914 | orchestrator |
2026-04-11 04:10:41.681925 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-11 04:10:41.681935 | orchestrator | Saturday 11 April 2026 04:10:36 +0000 (0:00:00.163) 0:00:11.157 ********
2026-04-11 04:10:41.681946 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-11 04:10:41.681957 | orchestrator |
2026-04-11 04:10:41.681967 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-11 04:10:41.681977 | orchestrator | Saturday 11 April 2026 04:10:36 +0000 (0:00:00.291) 0:00:11.448 ********
2026-04-11 04:10:41.681988 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:10:41.681998 | orchestrator |
2026-04-11 04:10:41.682009 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-11 04:10:41.682089 | orchestrator | Saturday 11 April 2026 04:10:37 +0000 (0:00:00.291) 0:00:11.740 ********
2026-04-11 04:10:41.682103 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-11 04:10:41.682112 | orchestrator |
2026-04-11 04:10:41.682122 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-11 04:10:41.682132 | orchestrator | Saturday 11 April 2026 04:10:38 +0000 (0:00:01.463) 0:00:13.203 ********
2026-04-11 04:10:41.682141 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-11 04:10:41.682165 | orchestrator |
2026-04-11 04:10:41.682174 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-11 04:10:41.682184 | orchestrator | Saturday 11 April 2026 04:10:38 +0000 (0:00:00.277) 0:00:13.481 ********
2026-04-11 04:10:41.682193 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-11 04:10:41.682203 | orchestrator |
2026-04-11 04:10:41.682212 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 04:10:41.682222 | orchestrator | Saturday 11 April 2026 04:10:39 +0000 (0:00:00.075) 0:00:13.780 ********
2026-04-11 04:10:41.682231 | orchestrator |
2026-04-11 04:10:41.682240 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 04:10:41.682250 | orchestrator | Saturday 11 April 2026 04:10:39 +0000 (0:00:00.072) 0:00:13.855 ********
2026-04-11 04:10:41.682259 | orchestrator |
2026-04-11 04:10:41.682269 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 04:10:41.682278 | orchestrator | Saturday 11 April 2026 04:10:39 +0000 (0:00:00.072) 0:00:13.928 ********
2026-04-11 04:10:41.682288 | orchestrator |
2026-04-11 04:10:41.682297 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-11 04:10:41.682307 | orchestrator | Saturday 11 April 2026 04:10:39 +0000 (0:00:00.318) 0:00:14.246 ********
2026-04-11 04:10:41.682316 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-11 04:10:41.682325 | orchestrator |
2026-04-11 04:10:41.682343 | orchestrator | TASK [Print report file information] *******************************************
2026-04-11 04:10:41.682353 | orchestrator | Saturday 11 April 2026 04:10:41 +0000 (0:00:01.460) 0:00:15.706 ********
2026-04-11 04:10:41.682362 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-11 04:10:41.682372 | orchestrator |  "msg": [
2026-04-11 04:10:41.682382 | orchestrator |  "Validator run completed.",
2026-04-11 04:10:41.682391 | orchestrator |  "You can find the report file here:",
2026-04-11 04:10:41.682401 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-11T04:10:26+00:00-report.json",
2026-04-11 04:10:41.682411 | orchestrator |  "on the following host:",
2026-04-11 04:10:41.682421 | orchestrator |  "testbed-manager"
2026-04-11 04:10:41.682430 | orchestrator |  ]
2026-04-11 04:10:41.682440 | orchestrator | }
2026-04-11 04:10:41.682450 | orchestrator |
2026-04-11 04:10:41.682488 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 04:10:41.682507 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-11 04:10:41.682521 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 04:10:41.682548 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 04:10:42.153288 | orchestrator |
2026-04-11 04:10:42.153368 | orchestrator |
2026-04-11 04:10:42.153381 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 04:10:42.153394 | orchestrator | Saturday 11 April 2026 04:10:41 +0000 (0:00:00.461) 0:00:16.168 ********
2026-04-11 04:10:42.153405 | orchestrator | ===============================================================================
2026-04-11 04:10:42.153415 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.01s
2026-04-11 04:10:42.153425 | orchestrator | Aggregate test results step one ----------------------------------------- 1.46s
2026-04-11 04:10:42.153436 | orchestrator | Write report file ------------------------------------------------------- 1.46s
2026-04-11 04:10:42.153445 | orchestrator | Create report output directory ------------------------------------------ 1.09s
2026-04-11 04:10:42.153455 | orchestrator | Get container info ------------------------------------------------------ 1.05s
2026-04-11 04:10:42.153515 | orchestrator | Get timestamp for report file ------------------------------------------- 0.91s
2026-04-11 04:10:42.153553 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s
2026-04-11 04:10:42.153564 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.56s
2026-04-11 04:10:42.153575 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.48s
2026-04-11 04:10:42.153586 | orchestrator | Flush handlers ---------------------------------------------------------- 0.47s
2026-04-11 04:10:42.153594 | orchestrator | Print report file information ------------------------------------------- 0.46s
2026-04-11 04:10:42.153600 | orchestrator | Prepare test data for container existance test -------------------------- 0.36s
2026-04-11 04:10:42.153607 | orchestrator | Set test result to failed if container is missing ----------------------- 0.35s
2026-04-11 04:10:42.153613 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.34s
2026-04-11 04:10:42.153619 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2026-04-11 04:10:42.153625 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.33s
2026-04-11 04:10:42.153634 | orchestrator | Print report file information ------------------------------------------- 0.32s
2026-04-11 04:10:42.153645 | orchestrator | Aggregate test results step three --------------------------------------- 0.30s
2026-04-11 04:10:42.153654 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.29s
2026-04-11 04:10:42.153664 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.29s
2026-04-11 04:10:42.536669 | orchestrator | + osism validate ceph-osds
2026-04-11 04:11:05.406697 | orchestrator |
2026-04-11 04:11:05.406810 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-04-11 04:11:05.406829 | orchestrator |
2026-04-11 04:11:05.406838 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-11 04:11:05.406848 | orchestrator | Saturday 11 April 2026 04:11:00 +0000 (0:00:00.521) 0:00:00.521 ********
2026-04-11 04:11:05.406858 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 04:11:05.406867 | orchestrator |
2026-04-11 04:11:05.406877 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-11 04:11:05.406886 | orchestrator | Saturday 11 April 2026 04:11:01 +0000 (0:00:00.906) 0:00:01.428 ********
2026-04-11 04:11:05.406896 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 04:11:05.406905 | orchestrator |
2026-04-11 04:11:05.406915 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-11 04:11:05.406924 | orchestrator | Saturday 11 April 2026 04:11:01 +0000 (0:00:00.601) 0:00:02.030 ********
2026-04-11 04:11:05.406934 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 04:11:05.406944 | orchestrator |
2026-04-11 04:11:05.406980 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-11 04:11:05.406991 | orchestrator | Saturday 11 April 2026 04:11:02 +0000 (0:00:00.816) 0:00:02.846 ********
2026-04-11 04:11:05.407000 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:11:05.407010 | orchestrator |
2026-04-11
04:11:05.407020 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-11 04:11:05.407030 | orchestrator | Saturday 11 April 2026 04:11:02 +0000 (0:00:00.156) 0:00:03.003 ********
2026-04-11 04:11:05.407040 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:11:05.407049 | orchestrator |
2026-04-11 04:11:05.407058 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-11 04:11:05.407068 | orchestrator | Saturday 11 April 2026 04:11:03 +0000 (0:00:00.156) 0:00:03.159 ********
2026-04-11 04:11:05.407076 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:11:05.407085 | orchestrator | skipping: [testbed-node-4]
2026-04-11 04:11:05.407095 | orchestrator | skipping: [testbed-node-5]
2026-04-11 04:11:05.407104 | orchestrator |
2026-04-11 04:11:05.407113 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-11 04:11:05.407123 | orchestrator | Saturday 11 April 2026 04:11:03 +0000 (0:00:00.336) 0:00:03.496 ********
2026-04-11 04:11:05.407160 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:11:05.407170 | orchestrator |
2026-04-11 04:11:05.407179 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-11 04:11:05.407189 | orchestrator | Saturday 11 April 2026 04:11:03 +0000 (0:00:00.194) 0:00:03.691 ********
2026-04-11 04:11:05.407198 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:11:05.407207 | orchestrator | ok: [testbed-node-4]
2026-04-11 04:11:05.407216 | orchestrator | ok: [testbed-node-5]
2026-04-11 04:11:05.407225 | orchestrator |
2026-04-11 04:11:05.407313 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-04-11 04:11:05.407324 | orchestrator | Saturday 11 April 2026 04:11:03 +0000 (0:00:00.356) 0:00:04.047 ********
2026-04-11 04:11:05.407334 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:11:05.407344 | orchestrator |
2026-04-11 04:11:05.407356 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-11 04:11:05.407367 | orchestrator | Saturday 11 April 2026 04:11:04 +0000 (0:00:00.846) 0:00:04.893 ********
2026-04-11 04:11:05.407382 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:11:05.407397 | orchestrator | ok: [testbed-node-4]
2026-04-11 04:11:05.407413 | orchestrator | ok: [testbed-node-5]
2026-04-11 04:11:05.407428 | orchestrator |
2026-04-11 04:11:05.407442 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-04-11 04:11:05.407507 | orchestrator | Saturday 11 April 2026 04:11:05 +0000 (0:00:00.315) 0:00:05.209 ********
2026-04-11 04:11:05.407526 | orchestrator | skipping: [testbed-node-3] => (item={'id': '108b97bddb43ee997bd72612fafbe07a1034bdb6edcc1e01fadc76b8f1e859a0', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-04-11 04:11:05.407543 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3dc0156ff105488ca65827b8419b7458cb9f09203253a4497f2e5dad036fda8d', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-04-11 04:11:05.407559 | orchestrator | skipping: [testbed-node-3] => (item={'id': '140880058a44a0919cb4be50e59649244ae501cf9af04eaf7e7967a15d9481ca', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-04-11 04:11:05.407571 | orchestrator | skipping: [testbed-node-3] => (item={'id': '335b4793fd2b2d80ba572b2f87b1db0b9373337c035b8fb7aff0d021153d56de', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-04-11 04:11:05.407580 | orchestrator | skipping: [testbed-node-3] => (item={'id': '31db13142a02c3736c572f2625f4d49b2ad9b5e21aa8c85f8bcf54a8c6e99fe8', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-04-11 04:11:05.407662 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8893a2b22c2eb8d2e7e50d65c2b5504cc18bb233d97f4373569d2fa6f3877425', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-04-11 04:11:05.407674 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0d1784b594cdb88bf3a6241b0c32cf14b8acb542e6f4d4880819e3a5c73381c8', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 44 minutes (healthy)'})
2026-04-11 04:11:05.407683 | orchestrator | skipping: [testbed-node-3] => (item={'id': '57f3bf81cb59e6a9801f51b7e44cc81c79b0379f0f4e984746f8147baed45176', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})
2026-04-11 04:11:05.407705 | orchestrator | skipping: [testbed-node-3] => (item={'id': '12ae5d224c1a018bf657a65e30264493e822461380432fbc0eedf383fb1e15ec', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-11 04:11:05.407720 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9be660d986f1529d7687a3a639d1f8d04b7480da7fe37f7ab82372a5e286044e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-11 04:11:05.407731 | orchestrator | skipping: [testbed-node-3] => (item={'id': '97a659a83fbf66c27887c9776487622ba14ef9cdf8434000d553b2373dda4f61', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-11 04:11:05.407742 | orchestrator | ok: [testbed-node-3] => (item={'id': '6c1f838026c973f9701dc7c41768e93326521d4cce84988f18c789b51bb18810', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-11 04:11:05.407753 | orchestrator | ok: [testbed-node-3] => (item={'id': '89e9bc0790e0b7b8dcbe8a8fa46e86010be8b8252969f6ff2bc352d393d4780d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-04-11 04:11:05.407762 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8cbcd8adfe0a323706c4759a1f49e7ee54b630ea3f8b227d0d1c8d800676beb7', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-04-11 04:11:05.407772 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5010a8ff963724fef897672f323af283c75e5bd87f7b3d163d19873d85c47421', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-11 04:11:05.407781 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f669ddf67f2917c7aeda31a38eac8eb03c980a553d99207bc4f61ce63c7fdc14', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-11 04:11:05.407791 | orchestrator | skipping: [testbed-node-3] => (item={'id': '397194b061f872d8247f16b39b423494e981e91e6ecf7b918f00ec4abebe5d85', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-11 04:11:05.407801 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a942ab2e6d8acaef313c83f0376a9b404a5ae432fb4df876a6dc1c7ecae536bf', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-11 04:11:05.407810 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3a48fa5fe74de85d0531bbd23e0e820295368d399dcc4ca7f9e2dcaf268e7441', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-11 04:11:05.407819 | orchestrator | skipping: [testbed-node-4] => (item={'id': '18feaede53b3e884a37840575aa1ad3080d75d768ff2fe56a344678376f55cb4', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-04-11 04:11:05.407837 | orchestrator | skipping: [testbed-node-4] => (item={'id': '06b448ca68e9cf466eabad8ef6f7daa463e575176ed107be7b14825e218f3ffb', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-04-11 04:11:05.711405 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ffebbf16b8c20e2117e3f9ba943760a6e59319d70b640760ce7b7aa0134a7463', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-04-11 04:11:05.711523 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cbf7f20d8cf2141c07e717e679dd1b6fd7a29f1ba3056fbab5d895dd8ea69640', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20
minutes (unhealthy)'})  2026-04-11 04:11:05.711532 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3a4ee72ea6c470371b86a24c115ae28a4bd3f8fd3169da6c2a7316c85704eb95', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-04-11 04:11:05.711550 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8b7ab528eb2d4db892a9282c0423c5bd2022686c33b9374920f5aa9eeb3e8abe', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-04-11 04:11:05.711554 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fcc33eb15018a8e038b164de55b6a7c99d399646a0977e3b27bbec4e413c1210', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 44 minutes (healthy)'})  2026-04-11 04:11:05.711558 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a7bc92ba9a70c5cd21fa911b65282a53f980dba928f2bf3b08feba02625d26a7', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2026-04-11 04:11:05.711572 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bca984263e877073b0b0a55539367a4ef997fe41e49aca1606a4bb160838618e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-11 04:11:05.711578 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'acae80a185742d35b32de31eb13a21a28cd9874a1c5789cb67a86b066b0173b0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-11 04:11:05.711582 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'd225eacc4e622f796877e7a80c2025296f2a696d0f1d94ed6acc329bd62f6946', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-11 04:11:05.711594 | orchestrator | ok: [testbed-node-4] => (item={'id': 'a9ce0c86211012cbb9d0b0bfc75f0048d81f032166bcfc7d4dcfc2b771f29a4d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-11 04:11:05.711599 | orchestrator | ok: [testbed-node-4] => (item={'id': 'c30cd3d1c0ab8de9195a1b656002fe04e3a0c66314162d04ddf67195a3352741', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-11 04:11:05.711603 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fb099821e2f4f569a8cc95bcaf360549cfcec8800cf94c0aad0b53092906bfc7', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-11 04:11:05.711608 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4206c37d7a30c51cc0c38179bd59f0f8cd1ce5d11fe0c35dc926375814f70d8f', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-11 04:11:05.711612 | orchestrator | skipping: [testbed-node-4] => (item={'id': '82f92f36aea1072fa50fb3fb75ef084abc40177fd7bac300165c12f9a2a032ac', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-11 04:11:05.711629 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9f0e8cf7f3f8d0241f03f94344401cc3c2139478fce2884aad5c34e1ded586cc', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 
'running', 'status': 'Up 2 hours'})  2026-04-11 04:11:05.711634 | orchestrator | skipping: [testbed-node-4] => (item={'id': '40fe2a3d642f04342571c58c12f59bc4077de68ca37d0b65ada7601e77957502', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-11 04:11:05.711639 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c41691ab7dbda507fd53c5cfef0298af00dc10de99e471a57563c0fce93e88be', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-11 04:11:05.711643 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'da6ae7a6dad605230496354f804286c2d71a2c4d99334ed1a3d67a77e003e11b', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-11 04:11:05.711650 | orchestrator | skipping: [testbed-node-5] => (item={'id': '472994a539b9ef2ffd0e41d78716435142666d4f5785572257fc7d170d40cec1', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-11 04:11:05.711654 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4c52b524ad2b9a75478774f25afa970a39a3bc26d8876846173b79bea99354e3', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-04-11 04:11:05.711658 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f382e6bba4b7ab126ff3d03a9fa0918226e329c5af30dfa8f4562a21f1a90bec', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-04-11 04:11:05.711663 | orchestrator | skipping: 
[testbed-node-5] => (item={'id': 'b43a788043bf1db2fc2d31a730495dfb136551c290af9effd79aba04ae03c9c7', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-04-11 04:11:05.711667 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0004585c9a7cbb109624a5147c8a4e2007a82da5e1bbb44ba3bd48c8304cd7eb', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-04-11 04:11:05.711671 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fb56b36b66c9f6a629ec6347be063e5effe956e1f5cf09be4cbfe9a1d9862867', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 44 minutes (healthy)'})  2026-04-11 04:11:05.711675 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1c3f0ec4d8bf3418ec93f6b085ea909357442806d2c825ec2f1f120c0e88383c', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2026-04-11 04:11:05.711679 | orchestrator | skipping: [testbed-node-5] => (item={'id': '74b479dc7a5b9bcda729949152bbc15f39a66c17f0745051daef1e273e3b438c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-11 04:11:05.711683 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5c132bfb73de9d2860a400bea3a4601c932404dfd3840155bf976f05f23ad5f8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-11 04:11:05.711687 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2290fc6d92f2664ca4829419700f93cc7a06a729eda8fab6b05764739bf1cc95', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-11 04:11:05.711702 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b8116f26909604dd6c2d9073006dba1d8180d0dbec8ab48a92a7f6db092ffe1e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-11 04:11:05.711710 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b72561fcb777c69691e506da43da3df9f7917cc702119a3d01e42c68b2308d3f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-11 04:11:17.802538 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c10a06ec55b83e7307c63e10968802fbed0707aadd5756183ba9f3178445d716', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-11 04:11:17.802679 | orchestrator | skipping: [testbed-node-5] => (item={'id': '74ad326b48f6cbf7c64558d86858f739a8c8f7acfe553aced02daec2172bc775', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-11 04:11:17.802699 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'da00b38f6861e15de797bced3f6e53b186e24a39dac424c4e3bf4667eb22f5d0', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-11 04:11:17.802713 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'da2d784abe42bfe57927c0b5c2cdff3a50bd9b57eccb3bed7cf7e4f3cce34806', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-11 04:11:17.802727 | orchestrator | 
skipping: [testbed-node-5] => (item={'id': 'f75d6f422cf3535f00575c0e7bd32f52d82386ae17e58a3c4b1ccba1f69b9549', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-11 04:11:17.802739 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1dd183d18729483c5d64ee498175a25599b8ded79fdac3815386de8e8f31d37b', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-11 04:11:17.802750 | orchestrator | 2026-04-11 04:11:17.802763 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-04-11 04:11:17.802776 | orchestrator | Saturday 11 April 2026 04:11:05 +0000 (0:00:00.565) 0:00:05.774 ******** 2026-04-11 04:11:17.802787 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:17.802799 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:11:17.802810 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:11:17.802821 | orchestrator | 2026-04-11 04:11:17.802832 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-11 04:11:17.802843 | orchestrator | Saturday 11 April 2026 04:11:06 +0000 (0:00:00.352) 0:00:06.127 ******** 2026-04-11 04:11:17.802866 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:11:17.802879 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:11:17.802890 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:11:17.802901 | orchestrator | 2026-04-11 04:11:17.802913 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-11 04:11:17.802924 | orchestrator | Saturday 11 April 2026 04:11:06 +0000 (0:00:00.523) 0:00:06.651 ******** 2026-04-11 04:11:17.802935 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:17.802947 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:11:17.802958 | orchestrator | ok: 
[testbed-node-5] 2026-04-11 04:11:17.802969 | orchestrator | 2026-04-11 04:11:17.802981 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-11 04:11:17.803017 | orchestrator | Saturday 11 April 2026 04:11:06 +0000 (0:00:00.351) 0:00:07.003 ******** 2026-04-11 04:11:17.803030 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:17.803043 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:11:17.803055 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:11:17.803068 | orchestrator | 2026-04-11 04:11:17.803081 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-11 04:11:17.803093 | orchestrator | Saturday 11 April 2026 04:11:07 +0000 (0:00:00.384) 0:00:07.388 ******** 2026-04-11 04:11:17.803124 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-11 04:11:17.803139 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-11 04:11:17.803152 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:11:17.803165 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-11 04:11:17.803177 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-11 04:11:17.803190 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:11:17.803203 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-11 04:11:17.803215 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-11 04:11:17.803228 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:11:17.803241 | orchestrator | 2026-04-11 04:11:17.803253 | orchestrator | TASK [Get count of ceph-osd containers that are not running] 
******************* 2026-04-11 04:11:17.803271 | orchestrator | Saturday 11 April 2026 04:11:07 +0000 (0:00:00.340) 0:00:07.728 ******** 2026-04-11 04:11:17.803290 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:17.803309 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:11:17.803329 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:11:17.803348 | orchestrator | 2026-04-11 04:11:17.803365 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-11 04:11:17.803376 | orchestrator | Saturday 11 April 2026 04:11:08 +0000 (0:00:00.573) 0:00:08.302 ******** 2026-04-11 04:11:17.803387 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:11:17.803416 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:11:17.803428 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:11:17.803439 | orchestrator | 2026-04-11 04:11:17.803449 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-11 04:11:17.803489 | orchestrator | Saturday 11 April 2026 04:11:08 +0000 (0:00:00.355) 0:00:08.658 ******** 2026-04-11 04:11:17.803500 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:11:17.803510 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:11:17.803521 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:11:17.803531 | orchestrator | 2026-04-11 04:11:17.803542 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-11 04:11:17.803553 | orchestrator | Saturday 11 April 2026 04:11:08 +0000 (0:00:00.311) 0:00:08.969 ******** 2026-04-11 04:11:17.803563 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:17.803574 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:11:17.803586 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:11:17.803604 | orchestrator | 2026-04-11 04:11:17.803623 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-11 
04:11:17.803640 | orchestrator | Saturday 11 April 2026 04:11:09 +0000 (0:00:00.336) 0:00:09.305 ******** 2026-04-11 04:11:17.803657 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:11:17.803676 | orchestrator | 2026-04-11 04:11:17.803699 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-11 04:11:17.803711 | orchestrator | Saturday 11 April 2026 04:11:09 +0000 (0:00:00.751) 0:00:10.057 ******** 2026-04-11 04:11:17.803721 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:11:17.803732 | orchestrator | 2026-04-11 04:11:17.803742 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-11 04:11:17.803763 | orchestrator | Saturday 11 April 2026 04:11:10 +0000 (0:00:00.277) 0:00:10.335 ******** 2026-04-11 04:11:17.803774 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:11:17.803784 | orchestrator | 2026-04-11 04:11:17.803795 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 04:11:17.803806 | orchestrator | Saturday 11 April 2026 04:11:10 +0000 (0:00:00.293) 0:00:10.628 ******** 2026-04-11 04:11:17.803816 | orchestrator | 2026-04-11 04:11:17.803827 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 04:11:17.803837 | orchestrator | Saturday 11 April 2026 04:11:10 +0000 (0:00:00.080) 0:00:10.709 ******** 2026-04-11 04:11:17.803848 | orchestrator | 2026-04-11 04:11:17.803859 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 04:11:17.803869 | orchestrator | Saturday 11 April 2026 04:11:10 +0000 (0:00:00.090) 0:00:10.800 ******** 2026-04-11 04:11:17.803880 | orchestrator | 2026-04-11 04:11:17.803891 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-11 04:11:17.803901 | orchestrator | Saturday 11 April 2026 04:11:10 +0000 
(0:00:00.083) 0:00:10.883 ******** 2026-04-11 04:11:17.803911 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:11:17.803922 | orchestrator | 2026-04-11 04:11:17.803933 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-11 04:11:17.803943 | orchestrator | Saturday 11 April 2026 04:11:11 +0000 (0:00:00.267) 0:00:11.151 ******** 2026-04-11 04:11:17.803954 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:11:17.803964 | orchestrator | 2026-04-11 04:11:17.803975 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-11 04:11:17.803985 | orchestrator | Saturday 11 April 2026 04:11:11 +0000 (0:00:00.277) 0:00:11.429 ******** 2026-04-11 04:11:17.803995 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:17.804006 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:11:17.804017 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:11:17.804027 | orchestrator | 2026-04-11 04:11:17.804037 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-11 04:11:17.804048 | orchestrator | Saturday 11 April 2026 04:11:11 +0000 (0:00:00.328) 0:00:11.757 ******** 2026-04-11 04:11:17.804058 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:17.804069 | orchestrator | 2026-04-11 04:11:17.804079 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-11 04:11:17.804090 | orchestrator | Saturday 11 April 2026 04:11:12 +0000 (0:00:00.756) 0:00:12.514 ******** 2026-04-11 04:11:17.804100 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-11 04:11:17.804111 | orchestrator | 2026-04-11 04:11:17.804121 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-11 04:11:17.804132 | orchestrator | Saturday 11 April 2026 04:11:13 +0000 (0:00:01.565) 0:00:14.079 ******** 2026-04-11 04:11:17.804142 | 
orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:17.804152 | orchestrator | 2026-04-11 04:11:17.804163 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-11 04:11:17.804173 | orchestrator | Saturday 11 April 2026 04:11:14 +0000 (0:00:00.140) 0:00:14.219 ******** 2026-04-11 04:11:17.804184 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:17.804194 | orchestrator | 2026-04-11 04:11:17.804204 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-11 04:11:17.804215 | orchestrator | Saturday 11 April 2026 04:11:14 +0000 (0:00:00.332) 0:00:14.552 ******** 2026-04-11 04:11:17.804225 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:11:17.804236 | orchestrator | 2026-04-11 04:11:17.804247 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-11 04:11:17.804264 | orchestrator | Saturday 11 April 2026 04:11:14 +0000 (0:00:00.122) 0:00:14.675 ******** 2026-04-11 04:11:17.804282 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:17.804299 | orchestrator | 2026-04-11 04:11:17.804318 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-11 04:11:17.804337 | orchestrator | Saturday 11 April 2026 04:11:14 +0000 (0:00:00.140) 0:00:14.816 ******** 2026-04-11 04:11:17.804365 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:17.804381 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:11:17.804392 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:11:17.804402 | orchestrator | 2026-04-11 04:11:17.804413 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-11 04:11:17.804423 | orchestrator | Saturday 11 April 2026 04:11:15 +0000 (0:00:00.318) 0:00:15.135 ******** 2026-04-11 04:11:17.804434 | orchestrator | changed: [testbed-node-3] 2026-04-11 04:11:17.804445 | orchestrator | changed: 
[testbed-node-4] 2026-04-11 04:11:17.804538 | orchestrator | changed: [testbed-node-5] 2026-04-11 04:11:29.314567 | orchestrator | 2026-04-11 04:11:29.314667 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-11 04:11:29.314681 | orchestrator | Saturday 11 April 2026 04:11:17 +0000 (0:00:02.737) 0:00:17.872 ******** 2026-04-11 04:11:29.314690 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:29.314699 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:11:29.314707 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:11:29.314715 | orchestrator | 2026-04-11 04:11:29.314724 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-04-11 04:11:29.314732 | orchestrator | Saturday 11 April 2026 04:11:18 +0000 (0:00:00.358) 0:00:18.231 ******** 2026-04-11 04:11:29.314740 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:29.314748 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:11:29.314755 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:11:29.314763 | orchestrator | 2026-04-11 04:11:29.314771 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-11 04:11:29.314779 | orchestrator | Saturday 11 April 2026 04:11:18 +0000 (0:00:00.590) 0:00:18.822 ******** 2026-04-11 04:11:29.314787 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:11:29.314795 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:11:29.314803 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:11:29.314811 | orchestrator | 2026-04-11 04:11:29.314836 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-11 04:11:29.314844 | orchestrator | Saturday 11 April 2026 04:11:19 +0000 (0:00:00.368) 0:00:19.190 ******** 2026-04-11 04:11:29.314852 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:29.314860 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:11:29.314867 | 
orchestrator | ok: [testbed-node-5] 2026-04-11 04:11:29.314875 | orchestrator | 2026-04-11 04:11:29.314883 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-11 04:11:29.314891 | orchestrator | Saturday 11 April 2026 04:11:19 +0000 (0:00:00.570) 0:00:19.761 ******** 2026-04-11 04:11:29.314898 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:11:29.314906 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:11:29.314916 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:11:29.314934 | orchestrator | 2026-04-11 04:11:29.314953 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-04-11 04:11:29.314968 | orchestrator | Saturday 11 April 2026 04:11:20 +0000 (0:00:00.327) 0:00:20.089 ******** 2026-04-11 04:11:29.314982 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:11:29.314996 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:11:29.315010 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:11:29.315024 | orchestrator | 2026-04-11 04:11:29.315037 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-11 04:11:29.315052 | orchestrator | Saturday 11 April 2026 04:11:20 +0000 (0:00:00.384) 0:00:20.473 ******** 2026-04-11 04:11:29.315067 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:29.315083 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:11:29.315099 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:11:29.315115 | orchestrator | 2026-04-11 04:11:29.315131 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-11 04:11:29.315140 | orchestrator | Saturday 11 April 2026 04:11:21 +0000 (0:00:00.633) 0:00:21.106 ******** 2026-04-11 04:11:29.315148 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:11:29.315178 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:11:29.315186 | orchestrator | ok: [testbed-node-5] 
2026-04-11 04:11:29.315194 | orchestrator |
2026-04-11 04:11:29.315201 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-04-11 04:11:29.315209 | orchestrator | Saturday 11 April 2026 04:11:21 +0000 (0:00:00.843) 0:00:21.949 ********
2026-04-11 04:11:29.315217 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:11:29.315224 | orchestrator | ok: [testbed-node-4]
2026-04-11 04:11:29.315232 | orchestrator | ok: [testbed-node-5]
2026-04-11 04:11:29.315239 | orchestrator |
2026-04-11 04:11:29.315247 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-04-11 04:11:29.315255 | orchestrator | Saturday 11 April 2026 04:11:22 +0000 (0:00:00.375) 0:00:22.325 ********
2026-04-11 04:11:29.315262 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:11:29.315270 | orchestrator | skipping: [testbed-node-4]
2026-04-11 04:11:29.315278 | orchestrator | skipping: [testbed-node-5]
2026-04-11 04:11:29.315285 | orchestrator |
2026-04-11 04:11:29.315293 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-04-11 04:11:29.315301 | orchestrator | Saturday 11 April 2026 04:11:22 +0000 (0:00:00.313) 0:00:22.639 ********
2026-04-11 04:11:29.315309 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:11:29.315316 | orchestrator | ok: [testbed-node-4]
2026-04-11 04:11:29.315324 | orchestrator | ok: [testbed-node-5]
2026-04-11 04:11:29.315331 | orchestrator |
2026-04-11 04:11:29.315339 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-11 04:11:29.315347 | orchestrator | Saturday 11 April 2026 04:11:23 +0000 (0:00:00.594) 0:00:23.233 ********
2026-04-11 04:11:29.315355 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 04:11:29.315362 | orchestrator |
2026-04-11 04:11:29.315370 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-11 04:11:29.315378 | orchestrator | Saturday 11 April 2026 04:11:23 +0000 (0:00:00.296) 0:00:23.529 ********
2026-04-11 04:11:29.315385 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:11:29.315393 | orchestrator |
2026-04-11 04:11:29.315401 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-11 04:11:29.315408 | orchestrator | Saturday 11 April 2026 04:11:23 +0000 (0:00:00.270) 0:00:23.800 ********
2026-04-11 04:11:29.315416 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 04:11:29.315424 | orchestrator |
2026-04-11 04:11:29.315431 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-11 04:11:29.315439 | orchestrator | Saturday 11 April 2026 04:11:25 +0000 (0:00:01.934) 0:00:25.734 ********
2026-04-11 04:11:29.315446 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 04:11:29.315480 | orchestrator |
2026-04-11 04:11:29.315488 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-11 04:11:29.315496 | orchestrator | Saturday 11 April 2026 04:11:25 +0000 (0:00:00.283) 0:00:26.018 ********
2026-04-11 04:11:29.315504 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 04:11:29.315512 | orchestrator |
2026-04-11 04:11:29.315537 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 04:11:29.315545 | orchestrator | Saturday 11 April 2026 04:11:26 +0000 (0:00:00.293) 0:00:26.312 ********
2026-04-11 04:11:29.315553 | orchestrator |
2026-04-11 04:11:29.315561 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 04:11:29.315569 | orchestrator | Saturday 11 April 2026 04:11:26 +0000 (0:00:00.076) 0:00:26.389 ********
2026-04-11 04:11:29.315577 | orchestrator |
2026-04-11 04:11:29.315584 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 04:11:29.315592 | orchestrator | Saturday 11 April 2026 04:11:26 +0000 (0:00:00.110) 0:00:26.499 ********
2026-04-11 04:11:29.315600 | orchestrator |
2026-04-11 04:11:29.315608 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-11 04:11:29.315616 | orchestrator | Saturday 11 April 2026 04:11:26 +0000 (0:00:00.095) 0:00:26.594 ********
2026-04-11 04:11:29.315631 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 04:11:29.315639 | orchestrator |
2026-04-11 04:11:29.315646 | orchestrator | TASK [Print report file information] *******************************************
2026-04-11 04:11:29.315654 | orchestrator | Saturday 11 April 2026 04:11:28 +0000 (0:00:01.737) 0:00:28.332 ********
2026-04-11 04:11:29.315669 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-04-11 04:11:29.315677 | orchestrator |  "msg": [
2026-04-11 04:11:29.315685 | orchestrator |  "Validator run completed.",
2026-04-11 04:11:29.315693 | orchestrator |  "You can find the report file here:",
2026-04-11 04:11:29.315701 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-11T04:11:01+00:00-report.json",
2026-04-11 04:11:29.315710 | orchestrator |  "on the following host:",
2026-04-11 04:11:29.315718 | orchestrator |  "testbed-manager"
2026-04-11 04:11:29.315726 | orchestrator |  ]
2026-04-11 04:11:29.315734 | orchestrator | }
2026-04-11 04:11:29.315742 | orchestrator |
2026-04-11 04:11:29.315750 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 04:11:29.315759 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-11 04:11:29.315768 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-11 04:11:29.315776 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-11 04:11:29.315784 | orchestrator |
2026-04-11 04:11:29.315792 | orchestrator |
2026-04-11 04:11:29.315800 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 04:11:29.315808 | orchestrator | Saturday 11 April 2026 04:11:28 +0000 (0:00:00.661) 0:00:28.994 ********
2026-04-11 04:11:29.315816 | orchestrator | ===============================================================================
2026-04-11 04:11:29.315824 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.74s
2026-04-11 04:11:29.315832 | orchestrator | Aggregate test results step one ----------------------------------------- 1.93s
2026-04-11 04:11:29.315839 | orchestrator | Write report file ------------------------------------------------------- 1.74s
2026-04-11 04:11:29.315847 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.57s
2026-04-11 04:11:29.315855 | orchestrator | Get timestamp for report file ------------------------------------------- 0.91s
2026-04-11 04:11:29.315863 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.85s
2026-04-11 04:11:29.315871 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.84s
2026-04-11 04:11:29.315879 | orchestrator | Create report output directory ------------------------------------------ 0.82s
2026-04-11 04:11:29.315886 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.76s
2026-04-11 04:11:29.315894 | orchestrator | Aggregate test results step one ----------------------------------------- 0.75s
2026-04-11 04:11:29.315902 | orchestrator | Print report file information ------------------------------------------- 0.66s
2026-04-11 04:11:29.315910 | orchestrator | Prepare test data ------------------------------------------------------- 0.63s
2026-04-11 04:11:29.315918 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.60s
2026-04-11 04:11:29.315925 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.59s
2026-04-11 04:11:29.315933 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.59s
2026-04-11 04:11:29.315941 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.57s
2026-04-11 04:11:29.315949 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.57s
2026-04-11 04:11:29.315957 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.57s
2026-04-11 04:11:29.315969 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.52s
2026-04-11 04:11:29.315978 | orchestrator | Prepare test data ------------------------------------------------------- 0.38s
2026-04-11 04:11:29.762122 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-04-11 04:11:29.770294 | orchestrator | + set -e
2026-04-11 04:11:29.770395 | orchestrator | + source /opt/manager-vars.sh
2026-04-11 04:11:29.770414 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-11 04:11:29.770429 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-11 04:11:29.770443 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-11 04:11:29.770527 | orchestrator | ++ CEPH_VERSION=reef
2026-04-11 04:11:29.770543 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-11 04:11:29.770559 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-11 04:11:29.770573 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-11 04:11:29.770588 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-11 04:11:29.770602 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-11 04:11:29.770618 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-11 04:11:29.770632 | orchestrator | ++ export ARA=false
2026-04-11 04:11:29.770647 | orchestrator | ++ ARA=false
2026-04-11 04:11:29.770661 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-11 04:11:29.770675 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-11 04:11:29.770688 | orchestrator | ++ export TEMPEST=false
2026-04-11 04:11:29.770702 | orchestrator | ++ TEMPEST=false
2026-04-11 04:11:29.770717 | orchestrator | ++ export IS_ZUUL=true
2026-04-11 04:11:29.770732 | orchestrator | ++ IS_ZUUL=true
2026-04-11 04:11:29.770746 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 04:11:29.770760 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 04:11:29.770775 | orchestrator | ++ export EXTERNAL_API=false
2026-04-11 04:11:29.770788 | orchestrator | ++ EXTERNAL_API=false
2026-04-11 04:11:29.770802 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-11 04:11:29.770816 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-11 04:11:29.770830 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-11 04:11:29.770843 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-11 04:11:29.770856 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-11 04:11:29.770871 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-11 04:11:29.770886 | orchestrator | + source /etc/os-release
2026-04-11 04:11:29.770901 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS'
2026-04-11 04:11:29.770916 | orchestrator | ++ NAME=Ubuntu
2026-04-11 04:11:29.770930 | orchestrator | ++ VERSION_ID=24.04
2026-04-11 04:11:29.770944 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)'
2026-04-11 04:11:29.770958 | orchestrator | ++ VERSION_CODENAME=noble
2026-04-11 04:11:29.770971 | orchestrator | ++ ID=ubuntu
2026-04-11 04:11:29.770985 | orchestrator | ++ ID_LIKE=debian
2026-04-11 04:11:29.770999 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-04-11 04:11:29.771015 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-04-11 04:11:29.771031 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-04-11 04:11:29.771047 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-04-11 04:11:29.771063 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-04-11 04:11:29.771075 | orchestrator | ++ LOGO=ubuntu-logo
2026-04-11 04:11:29.771085 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-04-11 04:11:29.771094 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-04-11 04:11:29.771104 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-04-11 04:11:29.789030 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-04-11 04:11:53.991983 | orchestrator |
2026-04-11 04:11:53.992865 | orchestrator | # Status of Elasticsearch
2026-04-11 04:11:53.992903 | orchestrator |
2026-04-11 04:11:53.992913 | orchestrator | + pushd /opt/configuration/contrib
2026-04-11 04:11:53.992922 | orchestrator | + echo
2026-04-11 04:11:53.992931 | orchestrator | + echo '# Status of Elasticsearch'
2026-04-11 04:11:53.992942 | orchestrator | + echo
2026-04-11 04:11:53.992955 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-04-11 04:11:54.160657 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-04-11 04:11:54.160749 | orchestrator |
2026-04-11 04:11:54.160759 | orchestrator | # Status of MariaDB
2026-04-11 04:11:54.160765 | orchestrator |
2026-04-11 04:11:54.160771 | orchestrator | + echo
2026-04-11 04:11:54.160776 | orchestrator | + echo '# Status of MariaDB'
2026-04-11 04:11:54.160781 | orchestrator | + echo
2026-04-11 04:11:54.161170 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-11 04:11:54.208020 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-11 04:11:54.208090 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-11 04:11:54.208097 | orchestrator | + MARIADB_USER=root_shard_0
2026-04-11 04:11:54.208102 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2026-04-11 04:11:54.257614 | orchestrator | Reading package lists...
2026-04-11 04:11:54.658195 | orchestrator | Building dependency tree...
2026-04-11 04:11:54.658614 | orchestrator | Reading state information...
2026-04-11 04:11:55.165131 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2026-04-11 04:11:55.165235 | orchestrator | bc set to manually installed.
2026-04-11 04:11:55.165251 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
2026-04-11 04:11:55.901669 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-04-11 04:11:55.901747 | orchestrator |
2026-04-11 04:11:55.901755 | orchestrator | # Status of Prometheus
2026-04-11 04:11:55.901760 | orchestrator |
2026-04-11 04:11:55.901765 | orchestrator | + echo
2026-04-11 04:11:55.901769 | orchestrator | + echo '# Status of Prometheus'
2026-04-11 04:11:55.901774 | orchestrator | + echo
2026-04-11 04:11:55.901779 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-04-11 04:11:55.974079 | orchestrator | Unauthorized
2026-04-11 04:11:55.979619 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-04-11 04:11:56.050564 | orchestrator | Unauthorized
2026-04-11 04:11:56.054511 | orchestrator |
2026-04-11 04:11:56.054573 | orchestrator | # Status of RabbitMQ
2026-04-11 04:11:56.054581 | orchestrator |
2026-04-11 04:11:56.054587 | orchestrator | + echo
2026-04-11 04:11:56.054592 | orchestrator | + echo '# Status of RabbitMQ'
2026-04-11 04:11:56.054598 | orchestrator | + echo
2026-04-11 04:11:56.056297 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-11 04:11:56.120056 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-11 04:11:56.120124 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-11 04:11:56.120132 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-04-11 04:11:56.731873 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2026-04-11 04:11:56.741750 | orchestrator |
2026-04-11 04:11:56.741850 | orchestrator | # Status of Redis
2026-04-11 04:11:56.741867 | orchestrator |
2026-04-11 04:11:56.741882 | orchestrator | + echo
2026-04-11 04:11:56.741890 | orchestrator | + echo '# Status of Redis'
2026-04-11 04:11:56.741898 | orchestrator | + echo
2026-04-11 04:11:56.741907 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-04-11 04:11:56.748778 | orchestrator | TCP OK - 0.003 second response time on 192.168.16.10 port 6379|time=0.002936s;;;0.000000;10.000000
2026-04-11 04:11:56.749627 | orchestrator |
2026-04-11 04:11:56.749659 | orchestrator | # Create backup of MariaDB database
2026-04-11 04:11:56.749671 | orchestrator |
2026-04-11 04:11:56.749681 | orchestrator | + popd
2026-04-11 04:11:56.749699 | orchestrator | + echo
2026-04-11 04:11:56.749714 | orchestrator | + echo '# Create backup of MariaDB database'
2026-04-11 04:11:56.749730 | orchestrator | + echo
2026-04-11 04:11:56.749744 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-04-11 04:11:59.093072 | orchestrator | 2026-04-11 04:11:59 | INFO  | Task ad8fa616-7876-451a-b8d5-1abed8a286dc (mariadb_backup) was prepared for execution.
2026-04-11 04:11:59.093928 | orchestrator | 2026-04-11 04:11:59 | INFO  | It takes a moment until task ad8fa616-7876-451a-b8d5-1abed8a286dc (mariadb_backup) has been started and output is visible here.
2026-04-11 04:14:05.056111 | orchestrator |
2026-04-11 04:14:05.056247 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 04:14:05.056263 | orchestrator |
2026-04-11 04:14:05.056294 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 04:14:05.056306 | orchestrator | Saturday 11 April 2026 04:12:03 +0000 (0:00:00.211) 0:00:00.211 ********
2026-04-11 04:14:05.056317 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:14:05.056352 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:14:05.056363 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:14:05.056378 | orchestrator |
2026-04-11 04:14:05.056389 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 04:14:05.056398 | orchestrator | Saturday 11 April 2026 04:12:04 +0000 (0:00:00.369) 0:00:00.581 ********
2026-04-11 04:14:05.056408 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-11 04:14:05.056418 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-11 04:14:05.056428 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-11 04:14:05.056438 | orchestrator |
2026-04-11 04:14:05.056493 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-11 04:14:05.056503 | orchestrator |
2026-04-11 04:14:05.056512 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-11 04:14:05.056521 | orchestrator | Saturday 11 April 2026 04:12:04 +0000 (0:00:00.614) 0:00:01.195 ********
2026-04-11 04:14:05.056532 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 04:14:05.056542 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 04:14:05.056551 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 04:14:05.056560 | orchestrator |
2026-04-11 04:14:05.056570 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-11 04:14:05.056587 | orchestrator | Saturday 11 April 2026 04:12:05 +0000 (0:00:00.467) 0:00:01.662 ********
2026-04-11 04:14:05.056599 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:14:05.056610 | orchestrator |
2026-04-11 04:14:05.056619 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-04-11 04:14:05.056630 | orchestrator | Saturday 11 April 2026 04:12:05 +0000 (0:00:00.637) 0:00:02.299 ********
2026-04-11 04:14:05.056640 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:14:05.056650 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:14:05.056660 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:14:05.056670 | orchestrator |
2026-04-11 04:14:05.056682 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-04-11 04:14:05.056693 | orchestrator | Saturday 11 April 2026 04:12:09 +0000 (0:00:03.534) 0:00:05.834 ********
2026-04-11 04:14:05.056703 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-11 04:14:05.056711 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-11 04:14:05.056719 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-11 04:14:05.056726 | orchestrator | mariadb_bootstrap_restart
2026-04-11 04:14:05.056734 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:14:05.056741 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:14:05.056770 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:14:05.056778 | orchestrator |
2026-04-11 04:14:05.056785 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-11 04:14:05.056793 | orchestrator | skipping: no hosts matched
2026-04-11 04:14:05.056800 | orchestrator |
2026-04-11 04:14:05.056808 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-11 04:14:05.056815 | orchestrator | skipping: no hosts matched
2026-04-11 04:14:05.056822 | orchestrator |
2026-04-11 04:14:05.056829 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-11 04:14:05.056837 | orchestrator | skipping: no hosts matched
2026-04-11 04:14:05.056844 | orchestrator |
2026-04-11 04:14:05.056852 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-11 04:14:05.056859 | orchestrator |
2026-04-11 04:14:05.056866 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-11 04:14:05.056874 | orchestrator | Saturday 11 April 2026 04:14:03 +0000 (0:01:54.306) 0:02:00.141 ********
2026-04-11 04:14:05.056882 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:14:05.056889 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:14:05.056907 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:14:05.056914 | orchestrator |
2026-04-11 04:14:05.056922 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-11 04:14:05.056929 | orchestrator | Saturday 11 April 2026 04:14:04 +0000 (0:00:00.376) 0:02:00.517 ********
2026-04-11 04:14:05.056937 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:14:05.056944 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:14:05.056951 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:14:05.056959 | orchestrator |
2026-04-11 04:14:05.056966 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 04:14:05.056974 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 04:14:05.056984 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-11 04:14:05.056991 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-11 04:14:05.057002 | orchestrator |
2026-04-11 04:14:05.057016 | orchestrator |
2026-04-11 04:14:05.057030 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 04:14:05.057041 | orchestrator | Saturday 11 April 2026 04:14:04 +0000 (0:00:00.455) 0:02:00.972 ********
2026-04-11 04:14:05.057050 | orchestrator | ===============================================================================
2026-04-11 04:14:05.057061 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 114.31s
2026-04-11 04:14:05.057091 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.53s
2026-04-11 04:14:05.057101 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.64s
2026-04-11 04:14:05.057112 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2026-04-11 04:14:05.057123 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.47s
2026-04-11 04:14:05.057134 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.46s
2026-04-11 04:14:05.057140 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.38s
2026-04-11 04:14:05.057146 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s
2026-04-11 04:14:05.501559 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-04-11 04:14:05.509356 | orchestrator | + set -e
2026-04-11 04:14:05.509421 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-11 04:14:05.510429 | orchestrator | ++ export INTERACTIVE=false
2026-04-11 04:14:05.510500 | orchestrator | ++ INTERACTIVE=false
2026-04-11 04:14:05.510506 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-11 04:14:05.510510 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-11 04:14:05.510514 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-11 04:14:05.512711 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-11 04:14:05.523241 | orchestrator |
2026-04-11 04:14:05.523300 | orchestrator | # OpenStack endpoints
2026-04-11 04:14:05.523307 | orchestrator |
2026-04-11 04:14:05.523311 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-11 04:14:05.523316 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-11 04:14:05.523320 | orchestrator | + export OS_CLOUD=admin
2026-04-11 04:14:05.523324 | orchestrator | + OS_CLOUD=admin
2026-04-11 04:14:05.523328 | orchestrator | + echo
2026-04-11 04:14:05.523333 | orchestrator | + echo '# OpenStack endpoints'
2026-04-11 04:14:05.523336 | orchestrator | + echo
2026-04-11 04:14:05.523340 | orchestrator | + openstack endpoint list
2026-04-11 04:14:08.997761 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-11 04:14:08.997873 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-04-11 04:14:08.997889 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-11 04:14:08.997927 | orchestrator | | 1ad018f322494f91af0b9043fcc2c9a0 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-04-11 04:14:08.997939 | orchestrator | | 228105be0b2948e9a23e79a448f9ca37 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-04-11 04:14:08.997949 | orchestrator | | 2308b568855b42b08dbbcd155749cfa5 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-04-11 04:14:08.997960 | orchestrator | | 2cbbb8cf45c4493f95d3d0aed6fef3b9 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-04-11 04:14:08.997971 | orchestrator | | 32ecba1022b24036bfc3b3d5c3a6489d | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-04-11 04:14:08.997982 | orchestrator | | 384912f39ddb467eb93683232ee4d4c2 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-04-11 04:14:08.997993 | orchestrator | | 39afd87d874a4f60828340f84b544426 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-11 04:14:08.998004 | orchestrator | | 409e1d36798b44c0b86658159544b7bd | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-11 04:14:08.998071 | orchestrator | | 40bb89e8313343468a3d734dd7cbd5c6 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-04-11 04:14:08.998084 | orchestrator | | 644dce61aff64c7f90dcec113dea21f1 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-04-11 04:14:08.998095 | orchestrator | | 67929017d01340b7ab1d62629c678027 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-04-11 04:14:08.998105 | orchestrator | | 6f82c71c52d04ba594e90c3b48bbfd1d | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-04-11 04:14:08.998116 | orchestrator | | 74797a99ce4e4c9795e75bd01a55dd7c | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-11 04:14:08.998133 | orchestrator | | 820aee162a8e453b90b5ef6bd134f0b9 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-04-11 04:14:08.998195 | orchestrator | | 82971e6268294ddfa8217b5dfde9cd2a | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-04-11 04:14:08.998220 | orchestrator | | 8e0e8e3106cc4af6ad76d548d408d69b | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-11 04:14:08.998234 | orchestrator | | 95ab37693d804a7cbaac4e1eba557333 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-04-11 04:14:08.998247 | orchestrator | | 9ae5c8d502f545f886cbb7c673b4b3c2 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-04-11 04:14:08.998260 | orchestrator | | 9d33585912a74ce99fbee9f52f7a0b8c | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-04-11 04:14:08.998273 | orchestrator | | b4c93dc2ab394baabb0db3db5edd0f8c | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-04-11 04:14:08.998321 | orchestrator | | b7b597b564464a2cbe22c01ec7114417 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-04-11 04:14:08.998353 | orchestrator | | bc3e5031a97c44f0b0510cc6f4c29a75 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-04-11 04:14:08.998384 | orchestrator | | c08ad837e2b9490094423685d0324f2b | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-04-11 04:14:08.998403 | orchestrator | | d2c301245cb04ec9acdbb9d1f4c32217 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-04-11 04:14:08.998422 | orchestrator | | d6c5d33ee149402a9cc0f7552fbe7e4e | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-04-11 04:14:08.998469 | orchestrator | | f2bcd0859bfc44e0941a3e4ffbadc02b | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-04-11 04:14:08.998490 | orchestrator | | f8f6a0a459ab4056a889dfd8453e3145 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-04-11 04:14:08.998510 | orchestrator | | fafc524a7a1144878d6c1bc143803385 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-04-11 04:14:08.998529 | orchestrator | | fc7ad654b4184a1486dd109efceca093 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-04-11 04:14:08.998549 | orchestrator | | fe926b6d19144e6c8bfc9b41166d819f | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-04-11 04:14:08.998569 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-11 04:14:09.335093 | orchestrator |
2026-04-11 04:14:09.335169 | orchestrator | # Cinder
2026-04-11 04:14:09.335174 | orchestrator |
2026-04-11 04:14:09.335178 | orchestrator | + echo
2026-04-11 04:14:09.335183 | orchestrator | + echo '# Cinder'
2026-04-11 04:14:09.335188 | orchestrator | + echo
2026-04-11 04:14:09.335192 | orchestrator | + openstack volume service list
2026-04-11 04:14:12.226752 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-11 04:14:12.226824 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-04-11 04:14:12.226829 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-11 04:14:12.226834 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-11T04:14:04.000000 |
2026-04-11 04:14:12.226838 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-11T04:14:04.000000 |
2026-04-11 04:14:12.226842 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-11T04:14:04.000000 |
2026-04-11 04:14:12.226846 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-11T04:14:04.000000 |
2026-04-11 04:14:12.226849 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-11T04:14:03.000000 |
2026-04-11 04:14:12.226853 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-11T04:14:03.000000 |
2026-04-11 04:14:12.226857 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-11T04:14:10.000000 |
2026-04-11 04:14:12.226861 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-11T04:14:03.000000 |
2026-04-11 04:14:12.226882 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-11T04:14:03.000000 |
2026-04-11 04:14:12.226886 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-11 04:14:12.551839 | orchestrator |
2026-04-11 04:14:12.551956 | orchestrator | # Neutron
2026-04-11 04:14:12.551986 | orchestrator |
2026-04-11 04:14:12.552002 | orchestrator | + echo
2026-04-11 04:14:12.552018 | orchestrator | + echo '# Neutron'
2026-04-11 04:14:12.552034 | orchestrator | + echo
2026-04-11 04:14:12.552048 | orchestrator | + openstack network agent list
2026-04-11 04:14:15.377940 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-11 04:14:15.378094 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-04-11 04:14:15.378109 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-11 04:14:15.378115 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-04-11 04:14:15.378121 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-04-11 04:14:15.378143 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-04-11 04:14:15.378149 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-04-11 04:14:15.378155 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-04-11 04:14:15.378160 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-04-11 04:14:15.378166 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-11 04:14:15.378171 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-11 04:14:15.378177 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-11 04:14:15.378183 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-11 04:14:15.709361 | orchestrator | + openstack network service provider list
2026-04-11 04:14:18.440991 | orchestrator | +---------------+------+---------+
2026-04-11 04:14:18.441091 | orchestrator | | Service Type
| Name | Default | 2026-04-11 04:14:18.441103 | orchestrator | +---------------+------+---------+ 2026-04-11 04:14:18.441111 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-04-11 04:14:18.441118 | orchestrator | +---------------+------+---------+ 2026-04-11 04:14:18.794159 | orchestrator | 2026-04-11 04:14:18.794240 | orchestrator | # Nova 2026-04-11 04:14:18.794249 | orchestrator | 2026-04-11 04:14:18.794256 | orchestrator | + echo 2026-04-11 04:14:18.794262 | orchestrator | + echo '# Nova' 2026-04-11 04:14:18.794269 | orchestrator | + echo 2026-04-11 04:14:18.794276 | orchestrator | + openstack compute service list 2026-04-11 04:14:21.671969 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-11 04:14:21.672081 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-04-11 04:14:21.672101 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-11 04:14:21.672149 | orchestrator | | 54195281-6c78-4989-a690-73fe17a9ba90 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-11T04:14:17.000000 | 2026-04-11 04:14:21.672163 | orchestrator | | f5285534-40d6-4fe7-a7b2-3eceba3c1493 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-11T04:14:11.000000 | 2026-04-11 04:14:21.672175 | orchestrator | | 2fb51b3b-1ebc-4a7c-920c-aff571c4f604 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-11T04:14:12.000000 | 2026-04-11 04:14:21.672191 | orchestrator | | ed34d05d-3659-4290-9e25-0a730bd13a8e | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-11T04:14:16.000000 | 2026-04-11 04:14:21.672206 | orchestrator | | 6534626c-c978-4c8d-a5b4-89821c132613 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-11T04:14:19.000000 | 2026-04-11 04:14:21.672221 | orchestrator 
| | 978b10e4-dbe1-492a-9dec-5f679b0deb96 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-11T04:14:19.000000 | 2026-04-11 04:14:21.672234 | orchestrator | | bc8b9392-07f4-4e84-90af-cd2dbf51524d | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-11T04:14:20.000000 | 2026-04-11 04:14:21.672248 | orchestrator | | f9732319-bf66-47e0-a77c-16729226de0c | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-11T04:14:11.000000 | 2026-04-11 04:14:21.672262 | orchestrator | | 9ce44803-1f36-44b9-9528-726126aae883 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-11T04:14:12.000000 | 2026-04-11 04:14:21.672275 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-11 04:14:22.004065 | orchestrator | + openstack hypervisor list 2026-04-11 04:14:25.387886 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-11 04:14:25.387979 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-04-11 04:14:25.387996 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-11 04:14:25.388006 | orchestrator | | 8f8b3ee1-dad8-458c-8a75-048e9df21836 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-04-11 04:14:25.388017 | orchestrator | | c85402e3-64e6-4aa9-a0c5-fe1f4e0db351 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-04-11 04:14:25.388026 | orchestrator | | fb7d5519-121f-4ff1-8ec1-3f9934d7d655 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-04-11 04:14:25.388036 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-11 04:14:25.732856 | orchestrator | 2026-04-11 04:14:25.732953 | orchestrator | # Run OpenStack test play 2026-04-11 04:14:25.732965 | orchestrator | 2026-04-11 
04:14:25.732973 | orchestrator | + echo
2026-04-11 04:14:25.732981 | orchestrator | + echo '# Run OpenStack test play'
2026-04-11 04:14:25.732993 | orchestrator | + echo
2026-04-11 04:14:25.733001 | orchestrator | + osism apply --environment openstack test
2026-04-11 04:14:27.975354 | orchestrator | 2026-04-11 04:14:27 | INFO  | Trying to run play test in environment openstack
2026-04-11 04:14:38.081789 | orchestrator | 2026-04-11 04:14:38 | INFO  | Task d24f1b7c-2e5b-4320-83fc-0808f19d8b0e (test) was prepared for execution.
2026-04-11 04:14:38.081864 | orchestrator | 2026-04-11 04:14:38 | INFO  | It takes a moment until task d24f1b7c-2e5b-4320-83fc-0808f19d8b0e (test) has been started and output is visible here.
2026-04-11 04:18:06.127125 | orchestrator |
2026-04-11 04:18:06.127290 | orchestrator | PLAY [Create test project] *****************************************************
2026-04-11 04:18:06.127310 | orchestrator |
2026-04-11 04:18:06.127360 | orchestrator | TASK [Create test domain] ******************************************************
2026-04-11 04:18:06.127373 | orchestrator | Saturday 11 April 2026 04:14:43 +0000 (0:00:00.092) 0:00:00.092 ********
2026-04-11 04:18:06.127404 | orchestrator | changed: [localhost]
2026-04-11 04:18:06.127426 | orchestrator |
2026-04-11 04:18:06.127464 | orchestrator | TASK [Create test-admin user] **************************************************
2026-04-11 04:18:06.127476 | orchestrator | Saturday 11 April 2026 04:14:47 +0000 (0:00:04.147) 0:00:04.239 ********
2026-04-11 04:18:06.127509 | orchestrator | changed: [localhost]
2026-04-11 04:18:06.127520 | orchestrator |
2026-04-11 04:18:06.127530 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-04-11 04:18:06.127540 | orchestrator | Saturday 11 April 2026 04:14:51 +0000 (0:00:04.594) 0:00:08.833 ********
2026-04-11 04:18:06.127549 | orchestrator | changed: [localhost]
2026-04-11 04:18:06.127559 | orchestrator |
2026-04-11 04:18:06.127569 | orchestrator | TASK [Create test project] *****************************************************
2026-04-11 04:18:06.127578 | orchestrator | Saturday 11 April 2026 04:14:59 +0000 (0:00:07.610) 0:00:16.443 ********
2026-04-11 04:18:06.127588 | orchestrator | changed: [localhost]
2026-04-11 04:18:06.127597 | orchestrator |
2026-04-11 04:18:06.127607 | orchestrator | TASK [Create test user] ********************************************************
2026-04-11 04:18:06.127617 | orchestrator | Saturday 11 April 2026 04:15:04 +0000 (0:00:04.801) 0:00:21.245 ********
2026-04-11 04:18:06.127626 | orchestrator | changed: [localhost]
2026-04-11 04:18:06.127636 | orchestrator |
2026-04-11 04:18:06.127646 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-04-11 04:18:06.127655 | orchestrator | Saturday 11 April 2026 04:15:08 +0000 (0:00:04.551) 0:00:25.797 ********
2026-04-11 04:18:06.127667 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-04-11 04:18:06.127679 | orchestrator | changed: [localhost] => (item=member)
2026-04-11 04:18:06.127691 | orchestrator | changed: [localhost] => (item=creator)
2026-04-11 04:18:06.127703 | orchestrator |
2026-04-11 04:18:06.127714 | orchestrator | TASK [Create test server group] ************************************************
2026-04-11 04:18:06.127726 | orchestrator | Saturday 11 April 2026 04:15:21 +0000 (0:00:12.680) 0:00:38.477 ********
2026-04-11 04:18:06.127736 | orchestrator | changed: [localhost]
2026-04-11 04:18:06.127747 | orchestrator |
2026-04-11 04:18:06.127775 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-04-11 04:18:06.127788 | orchestrator | Saturday 11 April 2026 04:15:25 +0000 (0:00:04.589) 0:00:43.066 ********
2026-04-11 04:18:06.127799 | orchestrator | changed: [localhost]
2026-04-11 04:18:06.127810 | orchestrator |
2026-04-11 04:18:06.127827 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-04-11 04:18:06.127842 | orchestrator | Saturday 11 April 2026 04:15:31 +0000 (0:00:05.094) 0:00:48.160 ********
2026-04-11 04:18:06.127868 | orchestrator | changed: [localhost]
2026-04-11 04:18:06.127887 | orchestrator |
2026-04-11 04:18:06.127903 | orchestrator | TASK [Create icmp security group] **********************************************
2026-04-11 04:18:06.127919 | orchestrator | Saturday 11 April 2026 04:15:35 +0000 (0:00:04.654) 0:00:52.815 ********
2026-04-11 04:18:06.127935 | orchestrator | changed: [localhost]
2026-04-11 04:18:06.127951 | orchestrator |
2026-04-11 04:18:06.127967 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-04-11 04:18:06.127982 | orchestrator | Saturday 11 April 2026 04:15:39 +0000 (0:00:04.242) 0:00:57.057 ********
2026-04-11 04:18:06.128000 | orchestrator | changed: [localhost]
2026-04-11 04:18:06.128016 | orchestrator |
2026-04-11 04:18:06.128033 | orchestrator | TASK [Create test keypair] *****************************************************
2026-04-11 04:18:06.128049 | orchestrator | Saturday 11 April 2026 04:15:44 +0000 (0:00:04.479) 0:01:01.537 ********
2026-04-11 04:18:06.128067 | orchestrator | changed: [localhost]
2026-04-11 04:18:06.128084 | orchestrator |
2026-04-11 04:18:06.128099 | orchestrator | TASK [Create test networks] ****************************************************
2026-04-11 04:18:06.128115 | orchestrator | Saturday 11 April 2026 04:15:48 +0000 (0:00:04.325) 0:01:05.863 ********
2026-04-11 04:18:06.128125 | orchestrator | changed: [localhost] => (item={'name': 'test-1'})
2026-04-11 04:18:06.128135 | orchestrator | changed: [localhost] => (item={'name': 'test-2'})
2026-04-11 04:18:06.128145 | orchestrator | changed: [localhost] => (item={'name': 'test-3'})
2026-04-11 04:18:06.128154 | orchestrator |
2026-04-11 04:18:06.128165 | orchestrator | TASK [Create test subnets] *****************************************************
2026-04-11 04:18:06.128186 | orchestrator | Saturday 11 April 2026 04:16:03 +0000 (0:00:15.131) 0:01:20.994 ********
2026-04-11 04:18:06.128197 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'})
2026-04-11 04:18:06.128208 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'})
2026-04-11 04:18:06.128218 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'})
2026-04-11 04:18:06.128227 | orchestrator |
2026-04-11 04:18:06.128237 | orchestrator | TASK [Create test routers] *****************************************************
2026-04-11 04:18:06.128247 | orchestrator | Saturday 11 April 2026 04:16:20 +0000 (0:00:16.622) 0:01:37.616 ********
2026-04-11 04:18:06.128256 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'})
2026-04-11 04:18:06.128273 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'})
2026-04-11 04:18:06.128283 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'})
2026-04-11 04:18:06.128293 | orchestrator |
2026-04-11 04:18:06.128302 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-04-11 04:18:06.128312 | orchestrator |
2026-04-11 04:18:06.128322 | orchestrator | TASK [Get test server group] ***************************************************
2026-04-11 04:18:06.128361 | orchestrator | Saturday 11 April 2026 04:16:52 +0000 (0:00:32.224) 0:02:09.840 ********
2026-04-11 04:18:06.128379 | orchestrator | ok: [localhost]
2026-04-11 04:18:06.128394 | orchestrator |
2026-04-11 04:18:06.128409 | orchestrator | TASK [Detach test volume] ******************************************************
2026-04-11 04:18:06.128424 | orchestrator | Saturday 11 April 2026 04:16:56 +0000 (0:00:03.952) 0:02:13.793 ********
2026-04-11 04:18:06.128467 | orchestrator | skipping: [localhost]
2026-04-11 04:18:06.128483 | orchestrator |
2026-04-11 04:18:06.128500 | orchestrator | TASK [Delete test volume] ******************************************************
2026-04-11 04:18:06.128517 | orchestrator | Saturday 11 April 2026 04:16:56 +0000 (0:00:00.058) 0:02:13.851 ********
2026-04-11 04:18:06.128533 | orchestrator | skipping: [localhost]
2026-04-11 04:18:06.128549 | orchestrator |
2026-04-11 04:18:06.128558 | orchestrator | TASK [Delete test instances] ***************************************************
2026-04-11 04:18:06.128568 | orchestrator | Saturday 11 April 2026 04:16:56 +0000 (0:00:00.047) 0:02:13.899 ********
2026-04-11 04:18:06.128578 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-11 04:18:06.128587 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-11 04:18:06.128597 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-11 04:18:06.128606 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-11 04:18:06.128616 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-11 04:18:06.128625 | orchestrator | skipping: [localhost]
2026-04-11 04:18:06.128634 | orchestrator |
2026-04-11 04:18:06.128645 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-04-11 04:18:06.128661 | orchestrator | Saturday 11 April 2026 04:16:56 +0000 (0:00:00.166) 0:02:14.065 ********
2026-04-11 04:18:06.128685 | orchestrator | skipping: [localhost]
2026-04-11 04:18:06.128703 | orchestrator |
2026-04-11 04:18:06.128718 | orchestrator | TASK [Create test instances] ***************************************************
2026-04-11 04:18:06.128733 | orchestrator | Saturday 11 April 2026 04:16:57 +0000 (0:00:00.151) 0:02:14.217 ********
2026-04-11 04:18:06.128748 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-11 04:18:06.128761 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-11 04:18:06.128776 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-11 04:18:06.128792 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-11 04:18:06.128822 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-11 04:18:06.128839 | orchestrator |
2026-04-11 04:18:06.128856 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-04-11 04:18:06.128871 | orchestrator | Saturday 11 April 2026 04:17:02 +0000 (0:00:05.240) 0:02:19.457 ********
2026-04-11 04:18:06.128887 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-11 04:18:06.128899 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-04-11 04:18:06.128908 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-04-11 04:18:06.128918 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-04-11 04:18:06.128930 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j530682253455.3829', 'results_file': '/ansible/.ansible_async/j530682253455.3829', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-11 04:18:06.128944 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j77745850373.3854', 'results_file': '/ansible/.ansible_async/j77745850373.3854', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-11 04:18:06.128953 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j275308845842.3879', 'results_file': '/ansible/.ansible_async/j275308845842.3879', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-11 04:18:06.128964 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j550529775515.3904', 'results_file': '/ansible/.ansible_async/j550529775515.3904', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-11 04:18:06.128973 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-11 04:18:06.128991 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j671081640674.3929', 'results_file': '/ansible/.ansible_async/j671081640674.3929', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-11 04:18:06.129001 | orchestrator |
2026-04-11 04:18:06.129011 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-04-11 04:18:06.129021 | orchestrator | Saturday 11 April 2026 04:18:00 +0000 (0:00:58.428) 0:03:17.885 ********
2026-04-11 04:18:06.129041 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-11 04:19:20.667836 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-11 04:19:20.667958 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-11 04:19:20.667975 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-11 04:19:20.667987 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-11 04:19:20.667998 | orchestrator |
2026-04-11 04:19:20.668010 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-04-11 04:19:20.668021 | orchestrator | Saturday 11 April 2026 04:18:06 +0000 (0:00:05.311) 0:03:23.197 ********
2026-04-11 04:19:20.668032 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-04-11 04:19:20.668046 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j944410626655.4041', 'results_file': '/ansible/.ansible_async/j944410626655.4041', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-11 04:19:20.668083 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j408624521124.4066', 'results_file': '/ansible/.ansible_async/j408624521124.4066', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-11 04:19:20.668095 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j219205776155.4091', 'results_file': '/ansible/.ansible_async/j219205776155.4091', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-11 04:19:20.668106 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j298646077733.4116', 'results_file': '/ansible/.ansible_async/j298646077733.4116', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-11 04:19:20.668117 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j427708518125.4141', 'results_file': '/ansible/.ansible_async/j427708518125.4141', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-11 04:19:20.668128 | orchestrator |
2026-04-11 04:19:20.668139 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-04-11 04:19:20.668150 | orchestrator | Saturday 11 April 2026 04:18:15 +0000 (0:00:09.565) 0:03:32.762 ********
2026-04-11 04:19:20.668160 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-11 04:19:20.668171 |
orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-11 04:19:20.668181 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-11 04:19:20.668192 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-11 04:19:20.668202 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-11 04:19:20.668213 | orchestrator |
2026-04-11 04:19:20.668224 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-04-11 04:19:20.668235 | orchestrator | Saturday 11 April 2026 04:18:20 +0000 (0:00:05.267) 0:03:38.030 ********
2026-04-11 04:19:20.668245 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-04-11 04:19:20.668256 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j571323026025.4210', 'results_file': '/ansible/.ansible_async/j571323026025.4210', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-11 04:19:20.668267 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j693412516087.4235', 'results_file': '/ansible/.ansible_async/j693412516087.4235', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-11 04:19:20.668278 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j176553459729.4261', 'results_file': '/ansible/.ansible_async/j176553459729.4261', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-11 04:19:20.668304 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j752042183752.4287', 'results_file': '/ansible/.ansible_async/j752042183752.4287', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-11 04:19:20.668332 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j417654437901.4313', 'results_file': '/ansible/.ansible_async/j417654437901.4313', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-11 04:19:20.668344 | orchestrator |
2026-04-11 04:19:20.668356 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-11 04:19:20.668381 | orchestrator | Saturday 11 April 2026 04:18:32 +0000 (0:00:11.822) 0:03:49.852 ********
2026-04-11 04:19:20.668402 | orchestrator | changed: [localhost]
2026-04-11 04:19:20.668417 | orchestrator |
2026-04-11 04:19:20.668430 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-11 04:19:20.668482 | orchestrator | Saturday 11 April 2026 04:18:39 +0000 (0:00:06.739) 0:03:56.591 ********
2026-04-11 04:19:20.668493 | orchestrator | changed: [localhost]
2026-04-11 04:19:20.668504 | orchestrator |
2026-04-11 04:19:20.668515 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-11 04:19:20.668526 | orchestrator | Saturday 11 April 2026 04:18:53 +0000 (0:00:14.311) 0:04:10.903 ********
2026-04-11 04:19:20.668537 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-11 04:19:20.668548 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-11 04:19:20.668559 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-11 04:19:20.668569 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-11 04:19:20.668580 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-11 04:19:20.668591 | orchestrator |
2026-04-11
04:19:20.668602 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-11 04:19:20.668612 | orchestrator | Saturday 11 April 2026 04:19:20 +0000 (0:00:26.324) 0:04:37.227 ********
2026-04-11 04:19:20.668623 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-11 04:19:20.668634 | orchestrator |  "msg": "test: 192.168.112.166"
2026-04-11 04:19:20.668645 | orchestrator | }
2026-04-11 04:19:20.668656 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-11 04:19:20.668667 | orchestrator |  "msg": "test-1: 192.168.112.108"
2026-04-11 04:19:20.668678 | orchestrator | }
2026-04-11 04:19:20.668689 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-11 04:19:20.668699 | orchestrator |  "msg": "test-2: 192.168.112.188"
2026-04-11 04:19:20.668710 | orchestrator | }
2026-04-11 04:19:20.668721 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-11 04:19:20.668732 | orchestrator |  "msg": "test-3: 192.168.112.136"
2026-04-11 04:19:20.668742 | orchestrator | }
2026-04-11 04:19:20.668753 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-11 04:19:20.668764 | orchestrator |  "msg": "test-4: 192.168.112.146"
2026-04-11 04:19:20.668774 | orchestrator | }
2026-04-11 04:19:20.668785 | orchestrator |
2026-04-11 04:19:20.668796 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 04:19:20.668807 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-11 04:19:20.668819 | orchestrator |
2026-04-11 04:19:20.668830 | orchestrator |
2026-04-11 04:19:20.668841 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 04:19:20.668852 | orchestrator | Saturday 11 April 2026 04:19:20 +0000 (0:00:00.166) 0:04:37.394 ********
2026-04-11 04:19:20.668862 | orchestrator |
===============================================================================
2026-04-11 04:19:20.668873 | orchestrator | Wait for instance creation to complete --------------------------------- 58.43s
2026-04-11 04:19:20.668884 | orchestrator | Create test routers ---------------------------------------------------- 32.22s
2026-04-11 04:19:20.668895 | orchestrator | Create floating ip addresses ------------------------------------------- 26.32s
2026-04-11 04:19:20.668906 | orchestrator | Create test subnets ---------------------------------------------------- 16.62s
2026-04-11 04:19:20.668916 | orchestrator | Create test networks --------------------------------------------------- 15.13s
2026-04-11 04:19:20.668927 | orchestrator | Attach test volume ----------------------------------------------------- 14.31s
2026-04-11 04:19:20.668938 | orchestrator | Add member roles to user test ------------------------------------------ 12.68s
2026-04-11 04:19:20.668949 | orchestrator | Wait for tags to be added ---------------------------------------------- 11.82s
2026-04-11 04:19:20.668959 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.57s
2026-04-11 04:19:20.668978 | orchestrator | Add manager role to user test-admin ------------------------------------- 7.61s
2026-04-11 04:19:20.668989 | orchestrator | Create test volume ------------------------------------------------------ 6.74s
2026-04-11 04:19:20.669000 | orchestrator | Add metadata to instances ----------------------------------------------- 5.31s
2026-04-11 04:19:20.669011 | orchestrator | Add tag to instances ---------------------------------------------------- 5.27s
2026-04-11 04:19:20.669021 | orchestrator | Create test instances --------------------------------------------------- 5.24s
2026-04-11 04:19:20.669032 | orchestrator | Create ssh security group ----------------------------------------------- 5.09s
2026-04-11 04:19:20.669042 | orchestrator | Create test project ----------------------------------------------------- 4.80s
2026-04-11 04:19:20.669053 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.65s
2026-04-11 04:19:20.669064 | orchestrator | Create test-admin user -------------------------------------------------- 4.59s
2026-04-11 04:19:20.669090 | orchestrator | Create test server group ------------------------------------------------ 4.59s
2026-04-11 04:19:20.669101 | orchestrator | Create test user -------------------------------------------------------- 4.55s
2026-04-11 04:19:21.030887 | orchestrator | + server_list
2026-04-11 04:19:21.030987 | orchestrator | + openstack --os-cloud test server list
2026-04-11 04:19:25.158746 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-11 04:19:25.158835 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-11 04:19:25.158846 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-11 04:19:25.158851 | orchestrator | | 3b8ec0b1-a556-45d2-80eb-d40412651344 | test-4 | ACTIVE | test-3=192.168.112.146, 192.168.202.244 | N/A (booted from volume) | SCS-1L-1 |
2026-04-11 04:19:25.158855 | orchestrator | | c4273570-b6e3-4b24-b5a3-11f3111313ad | test-3 | ACTIVE | test-2=192.168.112.136, 192.168.201.190 | N/A (booted from volume) | SCS-1L-1 |
2026-04-11 04:19:25.158859 | orchestrator | | cf244b49-ecc5-446b-858f-201bcece8db1 | test-2 | ACTIVE | test-2=192.168.112.188, 192.168.201.146 | N/A (booted from volume) | SCS-1L-1 |
2026-04-11 04:19:25.158863 | orchestrator | | 9e9e3efb-7bd1-4de6-a196-833998bea147 | test | ACTIVE | test-1=192.168.112.166, 192.168.200.215 | N/A (booted from volume) | SCS-1L-1 |
2026-04-11 04:19:25.158867 | orchestrator | | a2fdb24a-1036-4d5a-94e9-f935448d3ed1 | test-1 | ACTIVE | test-1=192.168.112.108, 192.168.200.75 | N/A (booted from volume) | SCS-1L-1 |
2026-04-11 04:19:25.158871 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-11 04:19:25.515812 | orchestrator | + openstack --os-cloud test server show test
2026-04-11 04:19:29.107030 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 04:19:29.107146 | orchestrator | | Field | Value |
2026-04-11 04:19:29.107157 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 04:19:29.107184 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-11 04:19:29.107194 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-11 04:19:29.107203 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-11 04:19:29.107214 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-04-11 04:19:29.107223 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-11 04:19:29.107233 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-11 04:19:29.107256 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-11 04:19:29.107262 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-11 04:19:29.107269 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-11 04:19:29.107289 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-11 04:19:29.107299 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-11 04:19:29.107323 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-11 04:19:29.107357 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-11 04:19:29.107373 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-11 04:19:29.107382 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-11 04:19:29.107392 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-11T04:17:34.000000 |
2026-04-11 04:19:29.107408 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-11 04:19:29.107418 | orchestrator | | accessIPv4 | |
2026-04-11 04:19:29.107475 | orchestrator | | accessIPv6 | |
2026-04-11 04:19:29.107483 | orchestrator | | addresses | test-1=192.168.112.166, 192.168.200.215 |
2026-04-11 04:19:29.107489 | orchestrator | | config_drive | |
2026-04-11 04:19:29.107494 | orchestrator | | created | 2026-04-11T04:17:07Z |
2026-04-11 04:19:29.107500 | orchestrator | | description | None |
2026-04-11 04:19:29.107509 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-11 04:19:29.107514 | orchestrator | | hostId | 2771b4cb1e58059f56d80dcaed3b9287c9915a466d4201a362df41a3 |
2026-04-11 04:19:29.107520 | orchestrator | | host_status | None |
2026-04-11 04:19:29.107531 | orchestrator | | id | 9e9e3efb-7bd1-4de6-a196-833998bea147 |
2026-04-11 04:19:29.107537 | orchestrator | | image | N/A (booted from volume) |
2026-04-11 04:19:29.107547 | orchestrator | |
key_name | test | 2026-04-11 04:19:29.107553 | orchestrator | | locked | False | 2026-04-11 04:19:29.107558 | orchestrator | | locked_reason | None | 2026-04-11 04:19:29.107564 | orchestrator | | name | test | 2026-04-11 04:19:29.107573 | orchestrator | | pinned_availability_zone | None | 2026-04-11 04:19:29.107579 | orchestrator | | progress | 0 | 2026-04-11 04:19:29.107586 | orchestrator | | project_id | 7fefb91f0b6142afa71e9a650608bd96 | 2026-04-11 04:19:29.107592 | orchestrator | | properties | hostname='test' | 2026-04-11 04:19:29.107603 | orchestrator | | security_groups | name='icmp' | 2026-04-11 04:19:29.107614 | orchestrator | | | name='ssh' | 2026-04-11 04:19:29.107620 | orchestrator | | server_groups | None | 2026-04-11 04:19:29.107627 | orchestrator | | status | ACTIVE | 2026-04-11 04:19:29.107634 | orchestrator | | tags | test | 2026-04-11 04:19:29.107643 | orchestrator | | trusted_image_certificates | None | 2026-04-11 04:19:29.107656 | orchestrator | | updated | 2026-04-11T04:18:07Z | 2026-04-11 04:19:29.107666 | orchestrator | | user_id | 4634a46a1c28429e8586f572d2ed1194 | 2026-04-11 04:19:29.107676 | orchestrator | | volumes_attached | delete_on_termination='True', id='f20dc248-0394-4260-8d30-9797cea79271' | 2026-04-11 04:19:29.107686 | orchestrator | | | delete_on_termination='False', id='334b69c3-7ca1-480d-bfc8-8e7b77f02a1d' | 2026-04-11 04:19:29.112493 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-11 04:19:29.464120 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-11 04:19:32.687975 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-11 04:19:32.688145 | orchestrator | | Field | Value | 2026-04-11 04:19:32.688164 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-11 04:19:32.688176 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-11 04:19:32.688194 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-11 04:19:32.688237 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-11 04:19:32.688257 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-11 04:19:32.688273 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-11 04:19:32.688324 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-11 04:19:32.688372 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-11 04:19:32.688392 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-11 04:19:32.688408 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-11 04:19:32.688426 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-11 04:19:32.688521 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-11 04:19:32.688536 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-11 04:19:32.688553 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-04-11 04:19:32.688572 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-11 04:19:32.688589 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-11 04:19:32.688621 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-11T04:17:34.000000 | 2026-04-11 04:19:32.688650 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-11 04:19:32.688666 | orchestrator | | accessIPv4 | | 2026-04-11 04:19:32.688676 | orchestrator | | accessIPv6 | | 2026-04-11 04:19:32.688686 | orchestrator | | addresses | test-1=192.168.112.108, 192.168.200.75 | 2026-04-11 04:19:32.688696 | orchestrator | | config_drive | | 2026-04-11 04:19:32.688717 | orchestrator | | created | 2026-04-11T04:17:07Z | 2026-04-11 04:19:32.688732 | orchestrator | | description | None | 2026-04-11 04:19:32.688742 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-11 04:19:32.688763 | orchestrator | | hostId | 2771b4cb1e58059f56d80dcaed3b9287c9915a466d4201a362df41a3 | 2026-04-11 04:19:32.688779 | orchestrator | | host_status | None | 2026-04-11 04:19:32.688807 | orchestrator | | id | a2fdb24a-1036-4d5a-94e9-f935448d3ed1 | 2026-04-11 04:19:32.688825 | orchestrator | | image | N/A (booted from volume) | 2026-04-11 04:19:32.688843 | orchestrator | | key_name | test | 2026-04-11 04:19:32.688861 | orchestrator | | locked | False | 2026-04-11 04:19:32.688878 | orchestrator | | locked_reason | None | 2026-04-11 04:19:32.688897 | orchestrator | | name | test-1 | 2026-04-11 04:19:32.688914 | orchestrator | | pinned_availability_zone | None | 2026-04-11 04:19:32.688941 | orchestrator | | progress | 0 | 2026-04-11 04:19:32.688952 | orchestrator | | 
project_id | 7fefb91f0b6142afa71e9a650608bd96 | 2026-04-11 04:19:32.688962 | orchestrator | | properties | hostname='test-1' | 2026-04-11 04:19:32.688980 | orchestrator | | security_groups | name='icmp' | 2026-04-11 04:19:32.688991 | orchestrator | | | name='ssh' | 2026-04-11 04:19:32.689001 | orchestrator | | server_groups | None | 2026-04-11 04:19:32.689011 | orchestrator | | status | ACTIVE | 2026-04-11 04:19:32.689021 | orchestrator | | tags | test | 2026-04-11 04:19:32.689030 | orchestrator | | trusted_image_certificates | None | 2026-04-11 04:19:32.689053 | orchestrator | | updated | 2026-04-11T04:18:08Z | 2026-04-11 04:19:32.689063 | orchestrator | | user_id | 4634a46a1c28429e8586f572d2ed1194 | 2026-04-11 04:19:32.689073 | orchestrator | | volumes_attached | delete_on_termination='True', id='d66ffbb8-185a-4949-b4f5-74ecaf7800f5' | 2026-04-11 04:19:32.693334 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-11 04:19:33.010939 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-11 04:19:36.220148 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-11 04:19:36.220272 | orchestrator | | Field | Value | 2026-04-11 04:19:36.220299 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-11 04:19:36.220316 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-11 04:19:36.220335 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-11 04:19:36.220390 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-11 04:19:36.220409 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-11 04:19:36.220427 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-11 04:19:36.220471 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-11 04:19:36.220510 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-11 04:19:36.220526 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-11 04:19:36.220542 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-11 04:19:36.220559 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-11 04:19:36.220575 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-11 04:19:36.220592 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-11 04:19:36.220630 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-11 04:19:36.220649 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-11 04:19:36.220665 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-11 04:19:36.220677 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-11T04:17:35.000000 | 2026-04-11 04:19:36.220696 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-11 04:19:36.220708 | orchestrator | | accessIPv4 | | 2026-04-11 04:19:36.220720 | orchestrator | | accessIPv6 | | 2026-04-11 04:19:36.220731 | orchestrator | | 
addresses | test-2=192.168.112.188, 192.168.201.146 | 2026-04-11 04:19:36.220742 | orchestrator | | config_drive | | 2026-04-11 04:19:36.220760 | orchestrator | | created | 2026-04-11T04:17:08Z | 2026-04-11 04:19:36.220776 | orchestrator | | description | None | 2026-04-11 04:19:36.220788 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-11 04:19:36.220799 | orchestrator | | hostId | 511cad18ea7291a59af43418ed7a0e2d21e972767f13bde6f4d27ac9 | 2026-04-11 04:19:36.220810 | orchestrator | | host_status | None | 2026-04-11 04:19:36.220830 | orchestrator | | id | cf244b49-ecc5-446b-858f-201bcece8db1 | 2026-04-11 04:19:36.220841 | orchestrator | | image | N/A (booted from volume) | 2026-04-11 04:19:36.220853 | orchestrator | | key_name | test | 2026-04-11 04:19:36.220864 | orchestrator | | locked | False | 2026-04-11 04:19:36.220882 | orchestrator | | locked_reason | None | 2026-04-11 04:19:36.220893 | orchestrator | | name | test-2 | 2026-04-11 04:19:36.220905 | orchestrator | | pinned_availability_zone | None | 2026-04-11 04:19:36.220917 | orchestrator | | progress | 0 | 2026-04-11 04:19:36.220929 | orchestrator | | project_id | 7fefb91f0b6142afa71e9a650608bd96 | 2026-04-11 04:19:36.220940 | orchestrator | | properties | hostname='test-2' | 2026-04-11 04:19:36.220957 | orchestrator | | security_groups | name='icmp' | 2026-04-11 04:19:36.220968 | orchestrator | | | name='ssh' | 2026-04-11 04:19:36.220979 | orchestrator | | server_groups | None | 2026-04-11 04:19:36.221343 | orchestrator | | status | ACTIVE | 2026-04-11 04:19:36.221357 | orchestrator | | tags | test | 2026-04-11 04:19:36.221367 | orchestrator | | 
trusted_image_certificates | None | 2026-04-11 04:19:36.221377 | orchestrator | | updated | 2026-04-11T04:18:08Z | 2026-04-11 04:19:36.221386 | orchestrator | | user_id | 4634a46a1c28429e8586f572d2ed1194 | 2026-04-11 04:19:36.221396 | orchestrator | | volumes_attached | delete_on_termination='True', id='6affec2b-cc2e-4873-add9-8d3ff5b665a0' | 2026-04-11 04:19:36.224897 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-11 04:19:36.579151 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-11 04:19:39.784004 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-11 04:19:39.784075 | orchestrator | | Field | Value | 2026-04-11 04:19:39.784100 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-11 04:19:39.784115 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-11 04:19:39.784120 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova | 2026-04-11 04:19:39.784123 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-11 04:19:39.784127 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-11 04:19:39.784131 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-11 04:19:39.784135 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-11 04:19:39.784150 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-11 04:19:39.784154 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-11 04:19:39.784158 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-11 04:19:39.784169 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-11 04:19:39.784178 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-11 04:19:39.784185 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-11 04:19:39.784191 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-11 04:19:39.784198 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-11 04:19:39.784204 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-11 04:19:39.784211 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-11T04:17:36.000000 | 2026-04-11 04:19:39.784221 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-11 04:19:39.784227 | orchestrator | | accessIPv4 | | 2026-04-11 04:19:39.784240 | orchestrator | | accessIPv6 | | 2026-04-11 04:19:39.784244 | orchestrator | | addresses | test-2=192.168.112.136, 192.168.201.190 | 2026-04-11 04:19:39.784251 | orchestrator | | config_drive | | 2026-04-11 04:19:39.784255 | orchestrator | | created | 2026-04-11T04:17:09Z | 2026-04-11 04:19:39.784258 | orchestrator | | description | None | 2026-04-11 04:19:39.784262 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', 
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-11 04:19:39.784266 | orchestrator | | hostId | 511cad18ea7291a59af43418ed7a0e2d21e972767f13bde6f4d27ac9 | 2026-04-11 04:19:39.784270 | orchestrator | | host_status | None | 2026-04-11 04:19:39.784277 | orchestrator | | id | c4273570-b6e3-4b24-b5a3-11f3111313ad | 2026-04-11 04:19:39.784286 | orchestrator | | image | N/A (booted from volume) | 2026-04-11 04:19:39.784289 | orchestrator | | key_name | test | 2026-04-11 04:19:39.784293 | orchestrator | | locked | False | 2026-04-11 04:19:39.784300 | orchestrator | | locked_reason | None | 2026-04-11 04:19:39.784304 | orchestrator | | name | test-3 | 2026-04-11 04:19:39.784307 | orchestrator | | pinned_availability_zone | None | 2026-04-11 04:19:39.784311 | orchestrator | | progress | 0 | 2026-04-11 04:19:39.784315 | orchestrator | | project_id | 7fefb91f0b6142afa71e9a650608bd96 | 2026-04-11 04:19:39.784319 | orchestrator | | properties | hostname='test-3' | 2026-04-11 04:19:39.784333 | orchestrator | | security_groups | name='icmp' | 2026-04-11 04:19:39.784337 | orchestrator | | | name='ssh' | 2026-04-11 04:19:39.784341 | orchestrator | | server_groups | None | 2026-04-11 04:19:39.784345 | orchestrator | | status | ACTIVE | 2026-04-11 04:19:39.784352 | orchestrator | | tags | test | 2026-04-11 04:19:39.784356 | orchestrator | | trusted_image_certificates | None | 2026-04-11 04:19:39.784359 | orchestrator | | updated | 2026-04-11T04:18:09Z | 2026-04-11 04:19:39.784363 | orchestrator | | user_id | 4634a46a1c28429e8586f572d2ed1194 | 2026-04-11 04:19:39.784367 | orchestrator | | volumes_attached | delete_on_termination='True', id='4a1527a7-4e68-4867-a03a-ee81f91d4889' | 2026-04-11 04:19:39.789564 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-11 04:19:40.144351 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-11 04:19:43.341238 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-11 04:19:43.341329 | orchestrator | | Field | Value | 2026-04-11 04:19:43.341341 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-11 04:19:43.341366 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-11 04:19:43.341376 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-11 04:19:43.341384 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-11 04:19:43.341392 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-11 04:19:43.341399 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-11 04:19:43.341406 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-11 
04:19:43.341523 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-11 04:19:43.341536 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-11 04:19:43.341544 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-11 04:19:43.341552 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-11 04:19:43.341560 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-11 04:19:43.341568 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-11 04:19:43.341577 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-11 04:19:43.341585 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-11 04:19:43.341594 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-11 04:19:43.341611 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-11T04:17:38.000000 | 2026-04-11 04:19:43.341625 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-11 04:19:43.341699 | orchestrator | | accessIPv4 | | 2026-04-11 04:19:43.341713 | orchestrator | | accessIPv6 | | 2026-04-11 04:19:43.341722 | orchestrator | | addresses | test-3=192.168.112.146, 192.168.202.244 | 2026-04-11 04:19:43.341734 | orchestrator | | config_drive | | 2026-04-11 04:19:43.341743 | orchestrator | | created | 2026-04-11T04:17:12Z | 2026-04-11 04:19:43.341752 | orchestrator | | description | None | 2026-04-11 04:19:43.341761 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-11 04:19:43.341776 | orchestrator | | hostId | a016dd2d1f272dd2c23f3e85b9255e71f0f248d00bb8edd8d4b20d00 | 2026-04-11 04:19:43.341785 | orchestrator | | host_status | None | 2026-04-11 04:19:43.341800 | orchestrator | | id | 
3b8ec0b1-a556-45d2-80eb-d40412651344 | 2026-04-11 04:19:43.341810 | orchestrator | | image | N/A (booted from volume) | 2026-04-11 04:19:43.341819 | orchestrator | | key_name | test | 2026-04-11 04:19:43.341828 | orchestrator | | locked | False | 2026-04-11 04:19:43.341841 | orchestrator | | locked_reason | None | 2026-04-11 04:19:43.341850 | orchestrator | | name | test-4 | 2026-04-11 04:19:43.341859 | orchestrator | | pinned_availability_zone | None | 2026-04-11 04:19:43.341873 | orchestrator | | progress | 0 | 2026-04-11 04:19:43.341883 | orchestrator | | project_id | 7fefb91f0b6142afa71e9a650608bd96 | 2026-04-11 04:19:43.341891 | orchestrator | | properties | hostname='test-4' | 2026-04-11 04:19:43.341905 | orchestrator | | security_groups | name='icmp' | 2026-04-11 04:19:43.341914 | orchestrator | | | name='ssh' | 2026-04-11 04:19:43.341923 | orchestrator | | server_groups | None | 2026-04-11 04:19:43.341932 | orchestrator | | status | ACTIVE | 2026-04-11 04:19:43.341945 | orchestrator | | tags | test | 2026-04-11 04:19:43.341954 | orchestrator | | trusted_image_certificates | None | 2026-04-11 04:19:43.341962 | orchestrator | | updated | 2026-04-11T04:18:10Z | 2026-04-11 04:19:43.341977 | orchestrator | | user_id | 4634a46a1c28429e8586f572d2ed1194 | 2026-04-11 04:19:43.341986 | orchestrator | | volumes_attached | delete_on_termination='True', id='4448a5ff-2f01-40e7-92ec-27aa7ae35dd0' | 2026-04-11 04:19:43.347833 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-11 04:19:43.698667 | orchestrator | + server_ping 2026-04-11 04:19:43.699918 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-11 04:19:43.700010 | orchestrator | ++ tr -d '\r' 2026-04-11 04:19:46.825050 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-11 04:19:46.825119 | orchestrator | + ping -c3 192.168.112.146 2026-04-11 04:19:46.842302 | orchestrator | PING 192.168.112.146 (192.168.112.146) 56(84) bytes of data. 2026-04-11 04:19:46.842387 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=1 ttl=63 time=9.88 ms 2026-04-11 04:19:47.836330 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=2 ttl=63 time=2.73 ms 2026-04-11 04:19:48.838365 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=3 ttl=63 time=2.38 ms 2026-04-11 04:19:48.838486 | orchestrator | 2026-04-11 04:19:48.838504 | orchestrator | --- 192.168.112.146 ping statistics --- 2026-04-11 04:19:48.838515 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-11 04:19:48.838527 | orchestrator | rtt min/avg/max/mdev = 2.375/4.995/9.881/3.457 ms 2026-04-11 04:19:48.838550 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-11 04:19:48.838563 | orchestrator | + ping -c3 192.168.112.188 2026-04-11 04:19:48.852266 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 
2026-04-11 04:19:48.852339 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=10.2 ms
2026-04-11 04:19:49.846481 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.92 ms
2026-04-11 04:19:50.847951 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=2.25 ms
2026-04-11 04:19:50.848044 | orchestrator |
2026-04-11 04:19:50.848058 | orchestrator | --- 192.168.112.188 ping statistics ---
2026-04-11 04:19:50.848070 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-11 04:19:50.848080 | orchestrator | rtt min/avg/max/mdev = 2.249/5.125/10.203/3.601 ms
2026-04-11 04:19:50.848091 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-11 04:19:50.848102 | orchestrator | + ping -c3 192.168.112.108
2026-04-11 04:19:50.863762 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2026-04-11 04:19:50.863845 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=10.8 ms
2026-04-11 04:19:51.856578 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.53 ms
2026-04-11 04:19:52.857895 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.12 ms
2026-04-11 04:19:52.858065 | orchestrator |
2026-04-11 04:19:52.858093 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-04-11 04:19:52.858110 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-11 04:19:52.858126 | orchestrator | rtt min/avg/max/mdev = 2.118/5.143/10.780/3.989 ms
2026-04-11 04:19:52.858486 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-11 04:19:52.858520 | orchestrator | + ping -c3 192.168.112.136
2026-04-11 04:19:52.869214 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data.
2026-04-11 04:19:52.869284 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=7.73 ms
2026-04-11 04:19:53.865967 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=2.28 ms
2026-04-11 04:19:54.866925 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=1.80 ms
2026-04-11 04:19:54.866992 | orchestrator |
2026-04-11 04:19:54.866999 | orchestrator | --- 192.168.112.136 ping statistics ---
2026-04-11 04:19:54.867004 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-11 04:19:54.867008 | orchestrator | rtt min/avg/max/mdev = 1.801/3.937/7.732/2.690 ms
2026-04-11 04:19:54.867266 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-11 04:19:54.867277 | orchestrator | + ping -c3 192.168.112.166
2026-04-11 04:19:54.877189 | orchestrator | PING 192.168.112.166 (192.168.112.166) 56(84) bytes of data.
2026-04-11 04:19:54.877262 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=1 ttl=63 time=5.91 ms
2026-04-11 04:19:55.875176 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=2 ttl=63 time=2.47 ms
2026-04-11 04:19:56.877493 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=3 ttl=63 time=1.88 ms
2026-04-11 04:19:56.877594 | orchestrator |
2026-04-11 04:19:56.877610 | orchestrator | --- 192.168.112.166 ping statistics ---
2026-04-11 04:19:56.877622 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-11 04:19:56.877633 | orchestrator | rtt min/avg/max/mdev = 1.877/3.418/5.908/1.777 ms
2026-04-11 04:19:56.877655 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-11 04:19:57.098390 | orchestrator | ok: Runtime: 0:11:14.040087
2026-04-11 04:19:57.164493 |
2026-04-11 04:19:57.164712 | TASK [Run tempest]
2026-04-11 04:19:57.705027 | orchestrator | skipping: Conditional result was False
2026-04-11 04:19:57.723507 |
2026-04-11 04:19:57.723680 | TASK [Check prometheus alert status]
2026-04-11 04:19:58.260242 | orchestrator | skipping: Conditional result was False
2026-04-11 04:19:58.273503 |
2026-04-11 04:19:58.273646 | PLAY [Upgrade testbed]
2026-04-11 04:19:58.284963 |
2026-04-11 04:19:58.285076 | TASK [Print next ceph version]
2026-04-11 04:19:58.363576 | orchestrator | ok
2026-04-11 04:19:58.373619 |
2026-04-11 04:19:58.373744 | TASK [Print next openstack version]
2026-04-11 04:19:58.445269 | orchestrator | ok
2026-04-11 04:19:58.453592 |
2026-04-11 04:19:58.453710 | TASK [Print next manager version]
2026-04-11 04:19:58.525040 | orchestrator | ok
2026-04-11 04:19:58.532089 |
2026-04-11 04:19:58.532205 | TASK [Set cloud fact (Zuul deployment)]
2026-04-11 04:19:58.603496 | orchestrator | ok
2026-04-11 04:19:58.619048 |
2026-04-11 04:19:58.619217 | TASK [Set cloud fact (local deployment)]
2026-04-11 04:19:58.655726 | orchestrator | skipping: Conditional result was False
2026-04-11 04:19:58.675363 |
2026-04-11 04:19:58.675580 | TASK [Fetch manager address]
2026-04-11 04:19:58.971441 | orchestrator | ok
2026-04-11 04:19:58.981453 |
2026-04-11 04:19:58.981608 | TASK [Set manager_host address]
2026-04-11 04:19:59.061553 | orchestrator | ok
2026-04-11 04:19:59.072995 |
2026-04-11 04:19:59.073128 | TASK [Run upgrade]
2026-04-11 04:19:59.785225 | orchestrator | + set -e
2026-04-11 04:19:59.785372 | orchestrator | + export MANAGER_VERSION=10.0.0
2026-04-11 04:19:59.785388 | orchestrator | + MANAGER_VERSION=10.0.0
2026-04-11 04:19:59.785395 | orchestrator | + CEPH_VERSION=reef
2026-04-11 04:19:59.785402 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-04-11 04:19:59.785408 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-04-11 04:19:59.785416 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0 reef 2024.2 kolla/release'
2026-04-11 04:19:59.791493 | orchestrator | + set -e
2026-04-11 04:19:59.791586 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-11 04:19:59.791601 | orchestrator | ++ export INTERACTIVE=false
2026-04-11 04:19:59.791615 | orchestrator | ++ INTERACTIVE=false
2026-04-11 04:19:59.791622 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-11 04:19:59.791632 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-11 04:19:59.792385 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
2026-04-11 04:19:59.829821 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
2026-04-11 04:19:59.831006 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-04-11 04:19:59.870929 | orchestrator |
2026-04-11 04:19:59.871017 | orchestrator | # UPGRADE MANAGER
2026-04-11 04:19:59.871036 | orchestrator |
2026-04-11 04:19:59.871044 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2
2026-04-11 04:19:59.871053 | orchestrator | + echo
2026-04-11 04:19:59.871063 | orchestrator | + echo '# UPGRADE MANAGER'
2026-04-11 04:19:59.871071 | orchestrator | + echo
2026-04-11 04:19:59.871079 | orchestrator | + export MANAGER_VERSION=10.0.0
2026-04-11 04:19:59.871087 | orchestrator | + MANAGER_VERSION=10.0.0
2026-04-11 04:19:59.871095 | orchestrator | + CEPH_VERSION=reef
2026-04-11 04:19:59.871104 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-04-11 04:19:59.871112 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-04-11 04:19:59.871120 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0
2026-04-11 04:19:59.881067 | orchestrator | + set -e
2026-04-11 04:19:59.881128 | orchestrator | + VERSION=10.0.0
2026-04-11 04:19:59.881135 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml
2026-04-11 04:19:59.890168 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-11 04:19:59.890294 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-11 04:19:59.895253 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-11 04:19:59.899070 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-11 04:19:59.905379 | orchestrator | /opt/configuration ~
2026-04-11 04:19:59.905477 | orchestrator | + set -e
2026-04-11 04:19:59.905489 | orchestrator | + pushd /opt/configuration
2026-04-11 04:19:59.905497 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-11 04:19:59.905508 | orchestrator | + source /opt/venv/bin/activate
2026-04-11 04:19:59.906789 | orchestrator | ++ deactivate nondestructive
2026-04-11 04:19:59.906867 | orchestrator | ++ '[' -n '' ']'
2026-04-11 04:19:59.906889 | orchestrator | ++ '[' -n '' ']'
2026-04-11 04:19:59.906900 | orchestrator | ++ hash -r
2026-04-11 04:19:59.906912 | orchestrator | ++ '[' -n '' ']'
2026-04-11 04:19:59.906919 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-11 04:19:59.906925 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-11 04:19:59.906931 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-11 04:19:59.907001 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-11 04:19:59.907014 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-11 04:19:59.907024 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-11 04:19:59.907035 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-11 04:19:59.907047 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-11 04:19:59.907057 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-11 04:19:59.907068 | orchestrator | ++ export PATH
2026-04-11 04:19:59.907084 | orchestrator | ++ '[' -n '' ']'
2026-04-11 04:19:59.907132 | orchestrator | ++ '[' -z '' ']'
2026-04-11 04:19:59.907144 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-11 04:19:59.907155 | orchestrator | ++ PS1='(venv) '
2026-04-11 04:19:59.907171 | orchestrator | ++ export PS1
2026-04-11 04:19:59.907181 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-11 04:19:59.907191 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-11 04:19:59.907201 | orchestrator | ++ hash -r
2026-04-11 04:19:59.907341 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-11 04:20:01.220543 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-11 04:20:01.222101 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-11 04:20:01.223768 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-11 04:20:01.225714 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-11 04:20:01.227033 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-11 04:20:01.249610 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-11 04:20:01.252129 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-11 04:20:01.253638 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-11 04:20:01.256081 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-11 04:20:01.298100 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-11 04:20:01.299952 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-11 04:20:01.301904 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-11 04:20:01.303562 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-11 04:20:01.307669 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-11 04:20:01.566337 | orchestrator | ++ which gilt
2026-04-11 04:20:01.567951 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-11 04:20:01.568031 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-11 04:20:01.841818 | orchestrator | osism.cfg-generics:
2026-04-11 04:20:01.961853 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-11 04:20:01.962822 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-11 04:20:01.963926 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-11 04:20:01.963962 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-11 04:20:03.125132 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-11 04:20:03.134614 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-11 04:20:03.529046 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-11 04:20:03.586255 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-11 04:20:03.586342 | orchestrator | + deactivate
2026-04-11 04:20:03.586354 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-11 04:20:03.586364 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-11 04:20:03.586370 | orchestrator | + export PATH
2026-04-11 04:20:03.586377 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-11 04:20:03.586384 | orchestrator | + '[' -n '' ']'
2026-04-11 04:20:03.586390 | orchestrator | + hash -r
2026-04-11 04:20:03.586408 | orchestrator | ~
2026-04-11 04:20:03.586415 | orchestrator | + '[' -n '' ']'
2026-04-11 04:20:03.586422 | orchestrator | + unset VIRTUAL_ENV
2026-04-11 04:20:03.586428 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-11 04:20:03.586446 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-11 04:20:03.586454 | orchestrator | + unset -f deactivate
2026-04-11 04:20:03.586458 | orchestrator | + popd
2026-04-11 04:20:03.588822 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-11 04:20:03.589043 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-04-11 04:20:03.596277 | orchestrator | + set -e
2026-04-11 04:20:03.596382 | orchestrator | + NAMESPACE=kolla/release
2026-04-11 04:20:03.596391 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-11 04:20:03.604098 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-11 04:20:03.609271 | orchestrator | /opt/configuration ~
2026-04-11 04:20:03.609378 | orchestrator | + set -e
2026-04-11 04:20:03.609388 | orchestrator | + pushd /opt/configuration
2026-04-11 04:20:03.609398 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-11 04:20:03.609407 | orchestrator | + source /opt/venv/bin/activate
2026-04-11 04:20:03.609427 | orchestrator | ++ deactivate nondestructive
2026-04-11 04:20:03.609453 | orchestrator | ++ '[' -n '' ']'
2026-04-11 04:20:03.609469 | orchestrator | ++ '[' -n '' ']'
2026-04-11 04:20:03.609524 | orchestrator | ++ hash -r
2026-04-11 04:20:03.609542 | orchestrator | ++ '[' -n '' ']'
2026-04-11 04:20:03.609550 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-11 04:20:03.609564 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-11 04:20:03.609572 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-11 04:20:03.609742 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-11 04:20:03.609755 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-11 04:20:03.609770 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-11 04:20:03.609785 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-11 04:20:03.609804 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-11 04:20:03.609965 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-11 04:20:03.609976 | orchestrator | ++ export PATH
2026-04-11 04:20:03.609984 | orchestrator | ++ '[' -n '' ']'
2026-04-11 04:20:03.609999 | orchestrator | ++ '[' -z '' ']'
2026-04-11 04:20:03.610008 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-11 04:20:03.610053 | orchestrator | ++ PS1='(venv) '
2026-04-11 04:20:03.610063 | orchestrator | ++ export PS1
2026-04-11 04:20:03.610071 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-11 04:20:03.610079 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-11 04:20:03.610092 | orchestrator | ++ hash -r
2026-04-11 04:20:03.610108 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-11 04:20:04.214840 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-11 04:20:04.217639 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-11 04:20:04.217709 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-11 04:20:04.218796 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-11 04:20:04.220135 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-11 04:20:04.232283 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-11 04:20:04.233883 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-11 04:20:04.235129 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-11 04:20:04.236708 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-11 04:20:04.282129 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-11 04:20:04.283757 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-11 04:20:04.285701 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-11 04:20:04.286958 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-11 04:20:04.291156 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-11 04:20:04.578006 | orchestrator | ++ which gilt
2026-04-11 04:20:04.579743 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-11 04:20:04.579790 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-11 04:20:04.817613 | orchestrator | osism.cfg-generics:
2026-04-11 04:20:04.902570 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-11 04:20:04.903543 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-11 04:20:04.904403 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-11 04:20:04.904605 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-11 04:20:05.542488 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-11 04:20:05.550805 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-11 04:20:05.958351 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-11 04:20:06.031319 | orchestrator | ~
2026-04-11 04:20:06.031393 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-11 04:20:06.031405 | orchestrator | + deactivate
2026-04-11 04:20:06.031414 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-11 04:20:06.031425 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-11 04:20:06.031452 | orchestrator | + export PATH
2026-04-11 04:20:06.031460 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-11 04:20:06.031467 | orchestrator | + '[' -n '' ']'
2026-04-11 04:20:06.031473 | orchestrator | + hash -r
2026-04-11 04:20:06.031481 | orchestrator | + '[' -n '' ']'
2026-04-11 04:20:06.031489 | orchestrator | + unset VIRTUAL_ENV
2026-04-11 04:20:06.031496 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-11 04:20:06.031504 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-11 04:20:06.031510 | orchestrator | + unset -f deactivate
2026-04-11 04:20:06.031518 | orchestrator | + popd
2026-04-11 04:20:06.033538 | orchestrator | ++ semver v0.20251130.0 6.0.0
2026-04-11 04:20:06.080885 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-11 04:20:06.081643 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-11 04:20:06.153901 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-11 04:20:06.154011 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-11 04:20:06.163803 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-11 04:20:06.170781 | orchestrator | +++ semver v0.20251130.0 9.5.0
2026-04-11 04:20:06.231994 | orchestrator | ++ '[' -1 -le 0 ']'
2026-04-11 04:20:06.232160 | orchestrator | +++ semver 10.0.0 10.0.0-0
2026-04-11 04:20:06.310917 | orchestrator | ++ '[' 1 -ge 0 ']'
2026-04-11 04:20:06.311002 | orchestrator | ++ echo true
2026-04-11 04:20:06.311066 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true
2026-04-11 04:20:06.312808 | orchestrator | +++ semver 2024.2 2024.2
2026-04-11 04:20:06.377972 | orchestrator | ++ '[' 0 -le 0 ']'
2026-04-11 04:20:06.378310 | orchestrator | +++ semver 2024.2 2025.1
2026-04-11 04:20:06.432973 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-04-11 04:20:06.433067 | orchestrator | ++ echo false
2026-04-11 04:20:06.433257 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false
2026-04-11 04:20:06.433274 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-11 04:20:06.433301 | orchestrator | + echo 'om_rpc_vhost: openstack'
2026-04-11 04:20:06.433364 | orchestrator | + echo 'om_notify_vhost: openstack'
2026-04-11 04:20:06.433575 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml
2026-04-11 04:20:06.438393 | orchestrator | + echo 'export RABBITMQ3TO4=true'
2026-04-11 04:20:06.438620 | orchestrator | + sudo tee -a /opt/manager-vars.sh
2026-04-11 04:20:06.453394 | orchestrator | export RABBITMQ3TO4=true
2026-04-11 04:20:06.456532 | orchestrator | + osism update manager
2026-04-11 04:20:12.818071 | orchestrator | Collecting uv
2026-04-11 04:20:12.928157 | orchestrator | Downloading uv-0.11.6-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
2026-04-11 04:20:12.950566 | orchestrator | Downloading uv-0.11.6-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25.0 MB)
2026-04-11 04:20:14.043667 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 25.0/25.0 MB 26.2 MB/s eta 0:00:00
2026-04-11 04:20:14.123836 | orchestrator | Installing collected packages: uv
2026-04-11 04:20:14.613165 | orchestrator | Successfully installed uv-0.11.6
2026-04-11 04:20:15.375803 | orchestrator | Resolved 11 packages in 367ms
2026-04-11 04:20:15.407557 | orchestrator | Downloading netaddr (2.2MiB)
2026-04-11 04:20:15.407655 | orchestrator | Downloading cryptography (4.3MiB)
2026-04-11 04:20:15.407688 | orchestrator | Downloading ansible-core (2.1MiB)
2026-04-11 04:20:15.454780 | orchestrator | Downloading ansible (54.5MiB)
2026-04-11 04:20:15.685553 | orchestrator | Downloaded netaddr
2026-04-11 04:20:15.793291 | orchestrator | Downloaded cryptography
2026-04-11 04:20:15.945843 | orchestrator | Downloaded ansible-core
2026-04-11 04:20:23.829955 | orchestrator | Downloaded ansible
2026-04-11 04:20:23.830348 | orchestrator | Prepared 11 packages in 8.45s
2026-04-11 04:20:24.417385 | orchestrator | Installed 11 packages in 585ms
2026-04-11 04:20:24.417546 | orchestrator | + ansible==11.11.0
2026-04-11 04:20:24.417562 | orchestrator | + ansible-core==2.18.15
2026-04-11 04:20:24.417573 | orchestrator | + cffi==2.0.0
2026-04-11 04:20:24.417584 | orchestrator | + cryptography==46.0.7
2026-04-11 04:20:24.417595 | orchestrator | + jinja2==3.1.6
2026-04-11 04:20:24.417929 | orchestrator | + markupsafe==3.0.3
2026-04-11 04:20:24.417943 | orchestrator | + netaddr==1.3.0
2026-04-11 04:20:24.417953 | orchestrator | + packaging==26.0
2026-04-11 04:20:24.417963 | orchestrator | + pycparser==3.0
2026-04-11 04:20:24.417972 | orchestrator | + pyyaml==6.0.3
2026-04-11 04:20:24.417985 | orchestrator | + resolvelib==1.0.1
2026-04-11 04:20:25.795993 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-2061190mwrrz6i/tmpy80c1xn4/ansible-collection-serviceslj3s6xi5'...
2026-04-11 04:20:27.189561 | orchestrator | Your branch is up to date with 'origin/main'.
2026-04-11 04:20:27.189647 | orchestrator | Already on 'main'
2026-04-11 04:20:27.752804 | orchestrator | Starting galaxy collection install process
2026-04-11 04:20:27.752888 | orchestrator | Process install dependency map
2026-04-11 04:20:27.752897 | orchestrator | Starting collection install process
2026-04-11 04:20:27.752906 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services'
2026-04-11 04:20:27.752914 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services
2026-04-11 04:20:27.752921 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-11 04:20:28.345723 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-206157ihizf1d4/tmpwst9bs0o/ansible-playbooks-manager515_6uka'...
2026-04-11 04:20:28.941578 | orchestrator | Already on 'main'
2026-04-11 04:20:28.941736 | orchestrator | Your branch is up to date with 'origin/main'.
2026-04-11 04:20:29.248881 | orchestrator | Starting galaxy collection install process
2026-04-11 04:20:29.248990 | orchestrator | Process install dependency map
2026-04-11 04:20:29.249009 | orchestrator | Starting collection install process
2026-04-11 04:20:29.249023 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager'
2026-04-11 04:20:29.249036 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager
2026-04-11 04:20:29.249047 | orchestrator | osism.manager:999.0.0 was installed successfully
2026-04-11 04:20:29.967416 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-04-11 04:20:29.967574 | orchestrator | -vvvv to see details
2026-04-11 04:20:30.513884 | orchestrator |
2026-04-11 04:20:30.513981 | orchestrator | PLAY [Apply role manager] ******************************************************
2026-04-11 04:20:30.513994 | orchestrator |
2026-04-11 04:20:30.514075 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-11 04:20:34.787911 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:34.788041 | orchestrator |
2026-04-11 04:20:34.788062 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-11 04:20:34.854100 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-11 04:20:34.854201 | orchestrator |
2026-04-11 04:20:34.854218 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-11 04:20:36.856098 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:36.856242 | orchestrator |
2026-04-11 04:20:36.856263 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-11 04:20:36.914554 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:36.914667 | orchestrator |
2026-04-11 04:20:36.914683 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-11 04:20:36.997747 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-11 04:20:36.997849 | orchestrator |
2026-04-11 04:20:36.997863 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-11 04:20:41.444030 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible)
2026-04-11 04:20:41.444143 | orchestrator | ok: [testbed-manager] => (item=/opt/archive)
2026-04-11 04:20:41.444159 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-11 04:20:41.444183 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data)
2026-04-11 04:20:41.444194 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-11 04:20:41.444205 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-11 04:20:41.444217 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-11 04:20:41.444228 | orchestrator | ok: [testbed-manager] => (item=/opt/state)
2026-04-11 04:20:41.444239 | orchestrator |
2026-04-11 04:20:41.444251 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-11 04:20:42.544524 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:42.544620 | orchestrator |
2026-04-11 04:20:42.544635 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-11 04:20:43.517071 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:43.517178 | orchestrator |
2026-04-11 04:20:43.517194 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-11 04:20:43.609811 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-11 04:20:43.609889 | orchestrator |
2026-04-11 04:20:43.609899 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-11 04:20:45.483186 | orchestrator | ok: [testbed-manager] => (item=ara)
2026-04-11 04:20:45.483302 | orchestrator | ok: [testbed-manager] => (item=ara-server)
2026-04-11 04:20:45.483316 | orchestrator |
2026-04-11 04:20:45.483327 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-11 04:20:46.441010 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:46.441128 | orchestrator |
2026-04-11 04:20:46.441148 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-11 04:20:46.515354 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:20:46.515508 | orchestrator |
2026-04-11 04:20:46.515525 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-11 04:20:46.611091 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-11 04:20:46.611175 | orchestrator |
2026-04-11 04:20:46.611187 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-11 04:20:47.578916 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:47.578992 | orchestrator |
2026-04-11 04:20:47.579000 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-11 04:20:47.646763 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-11 04:20:47.646855 | orchestrator |
2026-04-11 04:20:47.646904 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-11 04:20:49.683462 | orchestrator | ok: [testbed-manager] => (item=None)
2026-04-11 04:20:49.683588 | orchestrator | ok: [testbed-manager] => (item=None)
2026-04-11 04:20:49.683610 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:49.683661 | orchestrator |
2026-04-11 04:20:49.683671 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-11 04:20:50.665770 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:50.665867 | orchestrator |
2026-04-11 04:20:50.665880 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-11 04:20:50.722426 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:20:50.722586 | orchestrator |
2026-04-11 04:20:50.722608 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-11 04:20:50.857767 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-11 04:20:50.857845 | orchestrator |
2026-04-11 04:20:50.857856 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-11 04:20:51.615396 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:51.615547 | orchestrator |
2026-04-11 04:20:51.615565 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-11 04:20:52.229015 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:52.229115 | orchestrator |
2026-04-11 04:20:52.229149 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-11 04:20:54.134207 | orchestrator | ok: [testbed-manager] => (item=conductor)
2026-04-11 04:20:54.134309 | orchestrator | ok: [testbed-manager] => (item=openstack)
2026-04-11 04:20:54.134321 | orchestrator |
2026-04-11 04:20:54.134327 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-11 04:20:55.377577 | orchestrator | changed: [testbed-manager]
2026-04-11 04:20:55.377711 | orchestrator |
2026-04-11 04:20:55.377729 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-11 04:20:55.944793 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:55.944898 | orchestrator |
2026-04-11 04:20:55.944915 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-11 04:20:56.504158 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:56.504263 | orchestrator |
2026-04-11 04:20:56.504276 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-11 04:20:56.556142 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:20:56.556220 | orchestrator |
2026-04-11 04:20:56.556229 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-11 04:20:56.640423 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-11 04:20:56.640541 | orchestrator |
2026-04-11 04:20:56.640548 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-11 04:20:56.711333 | orchestrator | ok: [testbed-manager]
2026-04-11 04:20:56.711473 | orchestrator |
2026-04-11 04:20:56.711501 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-11 04:20:59.837599 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-04-11 04:20:59.837704 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-04-11 04:20:59.837713 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-04-11 04:20:59.837718 | orchestrator |
2026-04-11 04:20:59.837724 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-11 04:21:00.909537 | orchestrator | ok: [testbed-manager]
2026-04-11 04:21:00.909654 | orchestrator |
2026-04-11 04:21:00.909669 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-11 04:21:01.996629 | orchestrator | ok: [testbed-manager]
2026-04-11 04:21:01.996712 | orchestrator |
2026-04-11 04:21:01.996722 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-11 04:21:03.028400 | orchestrator | ok: [testbed-manager]
2026-04-11 04:21:03.028554 | orchestrator |
2026-04-11 04:21:03.028568 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-11 04:21:03.116634 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-11 04:21:03.116724 | orchestrator |
2026-04-11 04:21:03.116734 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-11 04:21:03.176748 | orchestrator | ok: [testbed-manager]
2026-04-11 04:21:03.176818 | orchestrator |
2026-04-11 04:21:03.176824 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-11 04:21:04.212169 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-04-11 04:21:04.212244 | orchestrator |
2026-04-11 04:21:04.212251 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-11 04:21:04.291138 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-11 04:21:04.291215 | orchestrator |
2026-04-11 04:21:04.291224 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-11 04:21:05.310871 | orchestrator | ok: [testbed-manager]
2026-04-11 04:21:05.310993 | orchestrator |
2026-04-11 04:21:05.311013 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-11 04:21:06.570235 | orchestrator | ok: [testbed-manager]
2026-04-11 04:21:06.570356 | orchestrator |
2026-04-11 04:21:06.570381 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-11 04:21:06.652941 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:21:06.653060 | orchestrator |
2026-04-11 04:21:06.653086 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-11 04:21:06.715427 | orchestrator | ok: [testbed-manager]
2026-04-11 04:21:06.715628 | orchestrator |
2026-04-11 04:21:06.715649 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-11 04:21:08.183876 | orchestrator | changed: [testbed-manager]
2026-04-11 04:21:08.183980 | orchestrator |
2026-04-11 04:21:08.183996 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-11 04:22:24.642774 | orchestrator | changed: [testbed-manager]
2026-04-11 04:22:24.642899 | orchestrator |
2026-04-11 04:22:24.642919 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-11 04:22:26.025552 | orchestrator | ok: [testbed-manager]
2026-04-11 04:22:26.025670 | orchestrator |
2026-04-11 04:22:26.025689 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-11 04:22:26.091818 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:22:26.091897 | orchestrator |
2026-04-11 04:22:26.091905 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-11 04:22:26.944069 | orchestrator | ok: [testbed-manager]
2026-04-11
04:22:26.944185 | orchestrator | 2026-04-11 04:22:26.944205 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-11 04:22:27.025527 | orchestrator | skipping: [testbed-manager] 2026-04-11 04:22:27.025617 | orchestrator | 2026-04-11 04:22:27.025629 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-11 04:22:27.025639 | orchestrator | 2026-04-11 04:22:27.025647 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-11 04:22:41.708894 | orchestrator | changed: [testbed-manager] 2026-04-11 04:22:41.709049 | orchestrator | 2026-04-11 04:22:41.709065 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-11 04:23:41.780983 | orchestrator | Pausing for 60 seconds 2026-04-11 04:23:41.781116 | orchestrator | changed: [testbed-manager] 2026-04-11 04:23:41.781140 | orchestrator | 2026-04-11 04:23:41.781154 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-04-11 04:23:41.848694 | orchestrator | ok: [testbed-manager] 2026-04-11 04:23:41.848790 | orchestrator | 2026-04-11 04:23:41.848802 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-11 04:23:46.020848 | orchestrator | changed: [testbed-manager] 2026-04-11 04:23:46.020940 | orchestrator | 2026-04-11 04:23:46.020950 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-11 04:24:48.865986 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-11 04:24:48.866138 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-04-11 04:24:48.866150 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-04-11 04:24:48.866159 | orchestrator | changed: [testbed-manager] 2026-04-11 04:24:48.866171 | orchestrator | 2026-04-11 04:24:48.866186 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-11 04:24:56.212224 | orchestrator | changed: [testbed-manager] 2026-04-11 04:24:56.212332 | orchestrator | 2026-04-11 04:24:56.212348 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-11 04:24:56.322459 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-11 04:24:56.322647 | orchestrator | 2026-04-11 04:24:56.322663 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-11 04:24:56.322670 | orchestrator | 2026-04-11 04:24:56.322677 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-11 04:24:56.388845 | orchestrator | skipping: [testbed-manager] 2026-04-11 04:24:56.388948 | orchestrator | 2026-04-11 04:24:56.388964 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-11 04:24:56.459681 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-11 04:24:56.459780 | orchestrator | 2026-04-11 04:24:56.459793 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-11 04:24:57.585618 | orchestrator | changed: [testbed-manager] 2026-04-11 04:24:57.585718 | orchestrator | 2026-04-11 04:24:57.585729 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-11 04:25:01.351594 
| orchestrator | ok: [testbed-manager]
2026-04-11 04:25:01.351724 | orchestrator |
2026-04-11 04:25:01.351756 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-11 04:25:01.435081 | orchestrator | ok: [testbed-manager] => {
2026-04-11 04:25:01.435156 | orchestrator | "version_check_result.stdout_lines": [
2026-04-11 04:25:01.435165 | orchestrator | "=== OSISM Container Version Check ===",
2026-04-11 04:25:01.435171 | orchestrator | "Checking running containers against expected versions...",
2026-04-11 04:25:01.435178 | orchestrator | "",
2026-04-11 04:25:01.435183 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-11 04:25:01.435189 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-11 04:25:01.435195 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435200 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-11 04:25:01.435205 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435210 | orchestrator | "",
2026-04-11 04:25:01.435215 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-11 04:25:01.435220 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-11 04:25:01.435225 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435230 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-11 04:25:01.435235 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435240 | orchestrator | "",
2026-04-11 04:25:01.435245 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-11 04:25:01.435250 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-11 04:25:01.435255 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435260 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-11 04:25:01.435265 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435269 | orchestrator | "",
2026-04-11 04:25:01.435274 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-11 04:25:01.435283 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-11 04:25:01.435291 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435300 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-11 04:25:01.435307 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435315 | orchestrator | "",
2026-04-11 04:25:01.435323 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-11 04:25:01.435331 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-11 04:25:01.435339 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435346 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-11 04:25:01.435354 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435363 | orchestrator | "",
2026-04-11 04:25:01.435370 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-04-11 04:25:01.435378 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-11 04:25:01.435421 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435429 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-11 04:25:01.435435 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435443 | orchestrator | "",
2026-04-11 04:25:01.435451 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-04-11 04:25:01.435459 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-11 04:25:01.435467 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435474 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-11 04:25:01.435482 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435489 | orchestrator | "",
2026-04-11 04:25:01.435497 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-04-11 04:25:01.435504 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-11 04:25:01.435511 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435520 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-11 04:25:01.435528 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435614 | orchestrator | "",
2026-04-11 04:25:01.435626 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-04-11 04:25:01.435635 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0",
2026-04-11 04:25:01.435643 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435651 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20260320.0",
2026-04-11 04:25:01.435659 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435667 | orchestrator | "",
2026-04-11 04:25:01.435679 | orchestrator | "Checking service: redis (Redis Cache)",
2026-04-11 04:25:01.435688 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-11 04:25:01.435697 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435705 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-11 04:25:01.435712 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435718 | orchestrator | "",
2026-04-11 04:25:01.435723 | orchestrator | "Checking service: api (OSISM API Service)",
2026-04-11 04:25:01.435729 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-11 04:25:01.435735 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435741 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-11 04:25:01.435746 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435752 | orchestrator | "",
2026-04-11 04:25:01.435757 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-04-11 04:25:01.435763 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-11 04:25:01.435769 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435774 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-11 04:25:01.435780 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435785 | orchestrator | "",
2026-04-11 04:25:01.435791 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-04-11 04:25:01.435797 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-11 04:25:01.435802 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435808 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-11 04:25:01.435813 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435819 | orchestrator | "",
2026-04-11 04:25:01.435825 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-04-11 04:25:01.435830 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-11 04:25:01.435836 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435842 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-11 04:25:01.435862 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435868 | orchestrator | "",
2026-04-11 04:25:01.435873 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-04-11 04:25:01.435879 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-11 04:25:01.435885 | orchestrator | " Enabled: true",
2026-04-11 04:25:01.435899 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-11 04:25:01.435905 | orchestrator | " Status: ✅ MATCH",
2026-04-11 04:25:01.435911 | orchestrator | "",
2026-04-11 04:25:01.435916 | orchestrator | "=== Summary ===",
2026-04-11 04:25:01.435922 | orchestrator | "Errors (version mismatches): 0",
2026-04-11 04:25:01.435928 | orchestrator | "Warnings (expected containers not running): 0",
2026-04-11 04:25:01.435933 | orchestrator | "",
2026-04-11 04:25:01.435939 | orchestrator | "✅ All running containers match expected versions!"
2026-04-11 04:25:01.435945 | orchestrator | ]
2026-04-11 04:25:01.435951 | orchestrator | }
2026-04-11 04:25:01.435957 | orchestrator |
2026-04-11 04:25:01.435963 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-04-11 04:25:01.494384 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:25:01.494490 | orchestrator |
2026-04-11 04:25:01.494505 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 04:25:01.494518 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
2026-04-11 04:25:01.494528 | orchestrator |
2026-04-11 04:25:14.870622 | orchestrator | 2026-04-11 04:25:14 | INFO  | Task eaaa8d9b-bce6-402d-9a24-5639ad697337 (sync inventory) is running in background. Output coming soon.
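The version check logged above reduces each service to a string comparison between the expected image reference and the one reported by the running container. A minimal sketch of that per-service comparison (illustrative only: the actual check script is deployed by the osism.services.manager role, and the function name and argument layout here are assumptions):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the per-service comparison. In the real check,
# the running image reference would be read from the container, e.g.:
#   docker inspect --format '{{.Config.Image}}' <container>
check_service_version() {
  local name="$1" expected="$2" running="$3"
  echo "Checking service: ${name}"
  echo "  Expected: ${expected}"
  echo "  Running: ${running}"
  if [ "${expected}" = "${running}" ]; then
    echo "  Status: MATCH"
  else
    echo "  Status: MISMATCH"
    return 1  # counted as an error in the summary
  fi
}

# Example using values from the log above:
check_service_version mariadb \
  registry.osism.tech/dockerhub/library/mariadb:11.8.4 \
  registry.osism.tech/dockerhub/library/mariadb:11.8.4
```

A caller would tally the non-zero returns into the "Errors (version mismatches)" count shown in the summary.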
2026-04-11 04:25:50.855827 | orchestrator | 2026-04-11 04:25:16 | INFO  | Starting group_vars file reorganization
2026-04-11 04:25:50.855917 | orchestrator | 2026-04-11 04:25:16 | INFO  | Moved 0 file(s) to their respective directories
2026-04-11 04:25:50.855926 | orchestrator | 2026-04-11 04:25:16 | INFO  | Group_vars file reorganization completed
2026-04-11 04:25:50.855933 | orchestrator | 2026-04-11 04:25:19 | INFO  | Starting variable preparation from inventory
2026-04-11 04:25:50.855939 | orchestrator | 2026-04-11 04:25:23 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-11 04:25:50.855945 | orchestrator | 2026-04-11 04:25:23 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-11 04:25:50.855951 | orchestrator | 2026-04-11 04:25:23 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-11 04:25:50.855956 | orchestrator | 2026-04-11 04:25:23 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-11 04:25:50.855962 | orchestrator | 2026-04-11 04:25:23 | INFO  | Variable preparation completed
2026-04-11 04:25:50.855967 | orchestrator | 2026-04-11 04:25:25 | INFO  | Starting inventory overwrite handling
2026-04-11 04:25:50.855973 | orchestrator | 2026-04-11 04:25:25 | INFO  | Handling group overwrites in 99-overwrite
2026-04-11 04:25:50.855978 | orchestrator | 2026-04-11 04:25:25 | INFO  | Removing group frr:children from 60-generic
2026-04-11 04:25:50.855984 | orchestrator | 2026-04-11 04:25:25 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-11 04:25:50.855989 | orchestrator | 2026-04-11 04:25:25 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-11 04:25:50.855995 | orchestrator | 2026-04-11 04:25:25 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-11 04:25:50.856000 | orchestrator | 2026-04-11 04:25:25 | INFO  | Handling group overwrites in 20-roles
2026-04-11 04:25:50.856006 | orchestrator | 2026-04-11 04:25:25 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-11 04:25:50.856011 | orchestrator | 2026-04-11 04:25:25 | INFO  | Removed 5 group(s) in total
2026-04-11 04:25:50.856017 | orchestrator | 2026-04-11 04:25:25 | INFO  | Inventory overwrite handling completed
2026-04-11 04:25:50.856022 | orchestrator | 2026-04-11 04:25:26 | INFO  | Starting merge of inventory files
2026-04-11 04:25:50.856027 | orchestrator | 2026-04-11 04:25:26 | INFO  | Inventory files merged successfully
2026-04-11 04:25:50.856033 | orchestrator | 2026-04-11 04:25:32 | INFO  | Generating minified hosts file
2026-04-11 04:25:50.856058 | orchestrator | 2026-04-11 04:25:34 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-11 04:25:50.856075 | orchestrator | 2026-04-11 04:25:34 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-11 04:25:50.856080 | orchestrator | 2026-04-11 04:25:35 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-11 04:25:50.856086 | orchestrator | 2026-04-11 04:25:49 | INFO  | Successfully wrote ClusterShell configuration
2026-04-11 04:25:51.110406 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-11 04:25:51.110582 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-11 04:25:51.110615 | orchestrator | + local max_attempts=60
2026-04-11 04:25:51.110630 | orchestrator | + local name=kolla-ansible
2026-04-11 04:25:51.110641 | orchestrator | + local attempt_num=1
2026-04-11 04:25:51.111277 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-11 04:25:51.150337 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-11 04:25:51.150464 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-11 04:25:51.150556 | orchestrator | + local max_attempts=60
2026-04-11 04:25:51.150580 | orchestrator | + local name=osism-ansible
2026-04-11 04:25:51.150601 | orchestrator | + local attempt_num=1
2026-04-11 04:25:51.150888 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-11 04:25:51.195967 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-11 04:25:51.196061 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-04-11 04:25:51.375726 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-04-11 04:25:51.375824 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy)
2026-04-11 04:25:51.375836 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy)
2026-04-11 04:25:51.375843 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-04-11 04:25:51.375865 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 3 hours ago Up 2 minutes (healthy) 8000/tcp
2026-04-11 04:25:51.375871 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy)
2026-04-11 04:25:51.375877 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy)
2026-04-11 04:25:51.375883 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy)
2026-04-11 04:25:51.375889 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 34 seconds ago
2026-04-11 04:25:51.375895 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 3 hours ago Up 3 minutes (healthy) 3306/tcp
2026-04-11 04:25:51.375901 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy)
2026-04-11 04:25:51.375927 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 3 hours ago Up 3 minutes (healthy) 6379/tcp
2026-04-11 04:25:51.375933 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy)
2026-04-11 04:25:51.375939 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp
2026-04-11 04:25:51.375945 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy)
2026-04-11 04:25:51.375951 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy)
2026-04-11 04:25:51.383524 | orchestrator | + [[ '' == \t\r\u\e ]]
2026-04-11 04:25:51.383628 | orchestrator | + [[ '' == \f\a\l\s\e ]]
2026-04-11 04:25:51.383642 | orchestrator | + osism apply facts
2026-04-11 04:26:02.970849 | orchestrator | 2026-04-11 04:26:02 | INFO  | Prepare task for execution of facts.
2026-04-11 04:26:03.072609 | orchestrator | 2026-04-11 04:26:03 | INFO  | Task e75d258b-2b9e-4714-a413-d3ddc8358e75 (facts) was prepared for execution.
2026-04-11 04:26:03.072713 | orchestrator | 2026-04-11 04:26:03 | INFO  | It takes a moment until task e75d258b-2b9e-4714-a413-d3ddc8358e75 (facts) has been started and output is visible here.
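The `wait_for_container_healthy` calls traced above poll the Docker health status until it reports `healthy`. A reconstructed sketch of such a helper, based only on what is visible in the xtrace (`max_attempts`, `name`, `attempt_num`, and the `docker inspect` probe); the retry/sleep details are assumptions, and the probe is parameterized here so the loop can be exercised without a Docker daemon:

```shell
#!/usr/bin/env bash
# Reconstructed sketch; the real helper lives in the testbed scripts and
# may differ. The default probe mirrors the traced command:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
docker_health_status() {
  /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
  local max_attempts="$1"
  local name="$2"
  local probe="${3:-docker_health_status}"  # injectable for testing (assumption)
  local attempt_num=1
  until [ "$("$probe" "$name")" = healthy ]; do
    if [ "$attempt_num" -ge "$max_attempts" ]; then
      echo "container ${name} did not become healthy after ${max_attempts} attempts" >&2
      return 1
    fi
    attempt_num=$((attempt_num + 1))
    sleep 1
  done
}
```

With a container that is already healthy, a call such as `wait_for_container_healthy 60 kolla-ansible` returns immediately, which matches the single `docker inspect` per container seen in the trace.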
2026-04-11 04:26:29.945984 | orchestrator |
2026-04-11 04:26:29.946115 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-11 04:26:29.946178 | orchestrator |
2026-04-11 04:26:29.946187 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-11 04:26:29.946194 | orchestrator | Saturday 11 April 2026 04:26:09 +0000 (0:00:02.693) 0:00:02.694 ********
2026-04-11 04:26:29.946199 | orchestrator | ok: [testbed-manager]
2026-04-11 04:26:29.946206 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:26:29.946213 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:26:29.946219 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:26:29.946225 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:26:29.946231 | orchestrator | ok: [testbed-node-4]
2026-04-11 04:26:29.946237 | orchestrator | ok: [testbed-node-5]
2026-04-11 04:26:29.946242 | orchestrator |
2026-04-11 04:26:29.946248 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-11 04:26:29.946335 | orchestrator | Saturday 11 April 2026 04:26:13 +0000 (0:00:03.763) 0:00:06.457 ********
2026-04-11 04:26:29.946345 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:26:29.946351 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:26:29.946358 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:26:29.946364 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:26:29.946370 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:26:29.946376 | orchestrator | skipping: [testbed-node-4]
2026-04-11 04:26:29.946381 | orchestrator | skipping: [testbed-node-5]
2026-04-11 04:26:29.946387 | orchestrator |
2026-04-11 04:26:29.946394 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-11 04:26:29.946399 | orchestrator |
2026-04-11 04:26:29.946405 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-11 04:26:29.946431 | orchestrator | Saturday 11 April 2026 04:26:17 +0000 (0:00:04.407) 0:00:10.864 ********
2026-04-11 04:26:29.946438 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:26:29.946444 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:26:29.946450 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:26:29.946456 | orchestrator | ok: [testbed-manager]
2026-04-11 04:26:29.946461 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:26:29.946467 | orchestrator | ok: [testbed-node-5]
2026-04-11 04:26:29.946510 | orchestrator | ok: [testbed-node-4]
2026-04-11 04:26:29.946516 | orchestrator |
2026-04-11 04:26:29.946522 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-11 04:26:29.946528 | orchestrator |
2026-04-11 04:26:29.946534 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-11 04:26:29.946541 | orchestrator | Saturday 11 April 2026 04:26:26 +0000 (0:00:08.028) 0:00:18.893 ********
2026-04-11 04:26:29.946547 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:26:29.946554 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:26:29.946571 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:26:29.946577 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:26:29.946623 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:26:29.946641 | orchestrator | skipping: [testbed-node-4]
2026-04-11 04:26:29.946647 | orchestrator | skipping: [testbed-node-5]
2026-04-11 04:26:29.946653 | orchestrator |
2026-04-11 04:26:29.946660 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 04:26:29.946667 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 04:26:29.946675 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 04:26:29.946682 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 04:26:29.946688 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 04:26:29.946694 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 04:26:29.946700 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 04:26:29.946706 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-11 04:26:29.946712 | orchestrator |
2026-04-11 04:26:29.946718 | orchestrator |
2026-04-11 04:26:29.946725 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 04:26:29.946732 | orchestrator | Saturday 11 April 2026 04:26:29 +0000 (0:00:03.467) 0:00:22.361 ********
2026-04-11 04:26:29.946739 | orchestrator | ===============================================================================
2026-04-11 04:26:29.946744 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.03s
2026-04-11 04:26:29.946748 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 4.41s
2026-04-11 04:26:29.946870 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.76s
2026-04-11 04:26:29.946876 | orchestrator | Gather facts for all hosts ---------------------------------------------- 3.47s
2026-04-11 04:26:30.182247 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-11 04:26:30.254474 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-11 04:26:30.254742 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-04-11 04:26:30.289847 | orchestrator | + OPENSTACK_VERSION=2025.1
2026-04-11 04:26:30.289962 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1
2026-04-11 04:26:30.294698 | orchestrator | + set -e
2026-04-11 04:26:30.294755 | orchestrator | + NAMESPACE=kolla/release/2025.1
2026-04-11 04:26:30.294765 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-11 04:26:30.302820 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh
2026-04-11 04:26:30.311234 | orchestrator |
2026-04-11 04:26:30.311381 | orchestrator | # UPGRADE SERVICES
2026-04-11 04:26:30.311549 | orchestrator |
2026-04-11 04:26:30.311581 | orchestrator | + set -e
2026-04-11 04:26:30.311610 | orchestrator | + echo
2026-04-11 04:26:30.311632 | orchestrator | + echo '# UPGRADE SERVICES'
2026-04-11 04:26:30.311701 | orchestrator | + echo
2026-04-11 04:26:30.311728 | orchestrator | + source /opt/manager-vars.sh
2026-04-11 04:26:30.312384 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-11 04:26:30.312483 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-11 04:26:30.312495 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-11 04:26:30.312506 | orchestrator | ++ CEPH_VERSION=reef
2026-04-11 04:26:30.312517 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-11 04:26:30.312529 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-11 04:26:30.312540 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-11 04:26:30.312551 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-11 04:26:30.312561 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-11 04:26:30.312572 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-11 04:26:30.312583 | orchestrator | ++ export ARA=false
2026-04-11 04:26:30.312594 | orchestrator | ++ ARA=false
2026-04-11 04:26:30.312604 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-11 04:26:30.312615 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-11 04:26:30.312626 | orchestrator | ++ export TEMPEST=false
2026-04-11 04:26:30.312636 | orchestrator | ++ TEMPEST=false
2026-04-11 04:26:30.312646 | orchestrator | ++ export IS_ZUUL=true
2026-04-11 04:26:30.312657 | orchestrator | ++ IS_ZUUL=true
2026-04-11 04:26:30.312668 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 04:26:30.312679 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 04:26:30.312689 | orchestrator | ++ export EXTERNAL_API=false
2026-04-11 04:26:30.312700 | orchestrator | ++ EXTERNAL_API=false
2026-04-11 04:26:30.312710 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-11 04:26:30.312720 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-11 04:26:30.312731 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-11 04:26:30.312742 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-11 04:26:30.312752 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-11 04:26:30.312763 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-11 04:26:30.312774 | orchestrator | ++ export RABBITMQ3TO4=true
2026-04-11 04:26:30.312785 | orchestrator | ++ RABBITMQ3TO4=true
2026-04-11 04:26:30.312795 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false
2026-04-11 04:26:30.312806 | orchestrator | + SKIP_CEPH_UPGRADE=false
2026-04-11 04:26:30.312817 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-04-11 04:26:30.318170 | orchestrator | + set -e
2026-04-11 04:26:30.318227 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-11 04:26:30.319699 | orchestrator |
2026-04-11 04:26:30.319735 | orchestrator | # PULL IMAGES
2026-04-11 04:26:30.319776 | orchestrator |
2026-04-11 04:26:30.319796 | orchestrator | ++ export INTERACTIVE=false
2026-04-11 04:26:30.319805 | orchestrator | ++ INTERACTIVE=false
2026-04-11 04:26:30.319813 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-11 04:26:30.319821 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-11 04:26:30.319829 | orchestrator | + source /opt/manager-vars.sh
2026-04-11 04:26:30.319837 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-11 04:26:30.319845 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-11 04:26:30.319853 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-11 04:26:30.319860 | orchestrator | ++ CEPH_VERSION=reef
2026-04-11 04:26:30.319868 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-11 04:26:30.319877 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-11 04:26:30.319884 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-11 04:26:30.319892 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-11 04:26:30.319901 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-11 04:26:30.319909 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-11 04:26:30.319918 | orchestrator | ++ export ARA=false
2026-04-11 04:26:30.319925 | orchestrator | ++ ARA=false
2026-04-11 04:26:30.319932 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-11 04:26:30.319941 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-11 04:26:30.319948 | orchestrator | ++ export TEMPEST=false
2026-04-11 04:26:30.319957 | orchestrator | ++ TEMPEST=false
2026-04-11 04:26:30.319966 | orchestrator | ++ export IS_ZUUL=true
2026-04-11 04:26:30.319975 | orchestrator | ++ IS_ZUUL=true
2026-04-11 04:26:30.319983 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 04:26:30.319991 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48
2026-04-11 04:26:30.319999 | orchestrator | ++ export EXTERNAL_API=false
2026-04-11 04:26:30.320008 | orchestrator | ++ EXTERNAL_API=false
2026-04-11 04:26:30.320016 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-11 04:26:30.320023 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-11 04:26:30.320031 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-11 04:26:30.320038 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-11 04:26:30.320046 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-11 04:26:30.320055 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-11 04:26:30.320063 | orchestrator | ++
export RABBITMQ3TO4=true 2026-04-11 04:26:30.320096 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-11 04:26:30.320104 | orchestrator | + echo 2026-04-11 04:26:30.320112 | orchestrator | + echo '# PULL IMAGES' 2026-04-11 04:26:30.320120 | orchestrator | + echo 2026-04-11 04:26:30.320175 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-11 04:26:30.373715 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-11 04:26:30.373784 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-11 04:26:31.750172 | orchestrator | 2026-04-11 04:26:31 | INFO  | Trying to run play pull-images in environment custom 2026-04-11 04:26:41.851115 | orchestrator | 2026-04-11 04:26:41 | INFO  | Prepare task for execution of pull-images. 2026-04-11 04:26:41.949651 | orchestrator | 2026-04-11 04:26:41 | INFO  | Task 00197693-c8da-4c24-be49-e1b2109a5f50 (pull-images) was prepared for execution. 2026-04-11 04:26:41.949739 | orchestrator | 2026-04-11 04:26:41 | INFO  | Task 00197693-c8da-4c24-be49-e1b2109a5f50 is running in background. No more output. Check ARA for logs. 
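The `semver` helper invoked in the traces above (`semver 9.5.0 7.0.0`, then `+ [[ 1 -ge 0 ]]`) is not itself shown in the log; judging from how its result is consumed, it prints 1, 0 or -1 depending on whether the first version is greater than, equal to or less than the second, with a release outranking its own prerelease (`semver 10.0.0 10.0.0-0` yields 1 earlier in the trace). A minimal sketch under that inferred contract — the helper's name and exact behaviour are assumptions, not taken from the testbed repository:

```shell
# Hypothetical sketch of the `semver` helper seen in the trace: prints 1, 0
# or -1 for greater / equal / less. Assumes MAJOR.MINOR.PATCH with an
# optional -PRERELEASE suffix; a prerelease sorts below the matching
# release (SemVer precedence rule). Prerelease-vs-prerelease ordering is
# not implemented here.
semver() {
    local a=$1 b=$2
    local a_core=${a%%-*} b_core=${b%%-*}
    local a_pre="" b_pre=""
    [[ $a == *-* ]] && a_pre=${a#*-}
    [[ $b == *-* ]] && b_pre=${b#*-}

    # Compare the numeric MAJOR.MINOR.PATCH components.
    local IFS=.
    local -a av=($a_core) bv=($b_core)
    local i
    for i in 0 1 2; do
        if (( ${av[i]:-0} > ${bv[i]:-0} )); then echo 1; return; fi
        if (( ${av[i]:-0} < ${bv[i]:-0} )); then echo -1; return; fi
    done

    # Equal core versions: a release outranks any prerelease.
    if [[ -z $a_pre && -n $b_pre ]]; then echo 1; return; fi
    if [[ -n $a_pre && -z $b_pre ]]; then echo -1; return; fi
    echo 0
}
```

With this contract, the gates in the trace read naturally: `[[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]]` means "manager version is at least 7.0.0".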
2026-04-11 04:26:42.229147 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-04-11 04:26:42.235004 | orchestrator | + set -e
2026-04-11 04:26:42.235085 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-11 04:26:42.235096 | orchestrator | ++ export INTERACTIVE=false
2026-04-11 04:26:42.235103 | orchestrator | ++ INTERACTIVE=false
2026-04-11 04:26:42.235109 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-11 04:26:42.235115 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-11 04:26:42.235122 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-11 04:26:42.236754 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-11 04:26:42.247361 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-11 04:26:42.247498 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-11 04:26:42.248588 | orchestrator | ++ semver 10.0.0 8.0.3
2026-04-11 04:26:42.300493 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-11 04:26:42.300595 | orchestrator | + osism apply frr
2026-04-11 04:26:53.845909 | orchestrator | 2026-04-11 04:26:53 | INFO  | Prepare task for execution of frr.
2026-04-11 04:26:53.947593 | orchestrator | 2026-04-11 04:26:53 | INFO  | Task bc95ea19-f8f5-4ac5-b957-8ac7587b7e6c (frr) was prepared for execution.
2026-04-11 04:26:53.948599 | orchestrator | 2026-04-11 04:26:53 | INFO  | It takes a moment until task bc95ea19-f8f5-4ac5-b957-8ac7587b7e6c (frr) has been started and output is visible here.
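manager-version.sh above derives `MANAGER_VERSION` with a one-line awk extraction from the configuration repository. The same extraction, standalone — the sample configuration.yml content here is assumed for illustration; only the `manager_version` key is taken from the trace:

```shell
# Write a sample configuration.yml (content assumed for illustration).
cat > /tmp/configuration.yml <<'EOF'
---
manager_version: 10.0.0
EOF

# Same awk call as in the trace: split fields on ": " and print the value
# of the line that begins with "manager_version:".
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' /tmp/configuration.yml)
echo "$MANAGER_VERSION"
```

This is why the trace switches from `MANAGER_VERSION=9.5.0` (from /opt/manager-vars.sh) to `MANAGER_VERSION=10.0.0`: the upgrade scripts re-read the version from the checked-out configuration repository rather than from the original deploy-time variables.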
2026-04-11 04:27:33.926663 | orchestrator |
2026-04-11 04:27:33.926764 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-11 04:27:33.926775 | orchestrator |
2026-04-11 04:27:33.926783 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-11 04:27:33.926792 | orchestrator | Saturday 11 April 2026 04:27:02 +0000 (0:00:04.492) 0:00:04.492 ********
2026-04-11 04:27:33.926801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-11 04:27:33.926810 | orchestrator |
2026-04-11 04:27:33.926818 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-11 04:27:33.926826 | orchestrator | Saturday 11 April 2026 04:27:06 +0000 (0:00:03.348) 0:00:07.840 ********
2026-04-11 04:27:33.926834 | orchestrator | ok: [testbed-manager]
2026-04-11 04:27:33.926843 | orchestrator |
2026-04-11 04:27:33.926850 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-11 04:27:33.926858 | orchestrator | Saturday 11 April 2026 04:27:08 +0000 (0:00:02.641) 0:00:10.482 ********
2026-04-11 04:27:33.926866 | orchestrator | ok: [testbed-manager]
2026-04-11 04:27:33.926874 | orchestrator |
2026-04-11 04:27:33.926890 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-11 04:27:33.926898 | orchestrator | Saturday 11 April 2026 04:27:12 +0000 (0:00:03.228) 0:00:13.711 ********
2026-04-11 04:27:33.926906 | orchestrator | ok: [testbed-manager]
2026-04-11 04:27:33.926914 | orchestrator |
2026-04-11 04:27:33.926921 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-11 04:27:33.926928 | orchestrator | Saturday 11 April 2026 04:27:14 +0000 (0:00:02.107) 0:00:15.818 ********
2026-04-11 04:27:33.926955 | orchestrator | ok: [testbed-manager]
2026-04-11 04:27:33.926963 | orchestrator |
2026-04-11 04:27:33.926971 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-11 04:27:33.926978 | orchestrator | Saturday 11 April 2026 04:27:16 +0000 (0:00:02.145) 0:00:17.963 ********
2026-04-11 04:27:33.926985 | orchestrator | ok: [testbed-manager]
2026-04-11 04:27:33.926993 | orchestrator |
2026-04-11 04:27:33.927000 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-04-11 04:27:33.927012 | orchestrator | Saturday 11 April 2026 04:27:18 +0000 (0:00:02.631) 0:00:20.595 ********
2026-04-11 04:27:33.927019 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:27:33.927027 | orchestrator |
2026-04-11 04:27:33.927034 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-04-11 04:27:33.927041 | orchestrator | Saturday 11 April 2026 04:27:20 +0000 (0:00:01.281) 0:00:21.876 ********
2026-04-11 04:27:33.927049 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:27:33.927056 | orchestrator |
2026-04-11 04:27:33.927063 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-04-11 04:27:33.927071 | orchestrator | Saturday 11 April 2026 04:27:21 +0000 (0:00:01.239) 0:00:23.115 ********
2026-04-11 04:27:33.927078 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:27:33.927085 | orchestrator |
2026-04-11 04:27:33.927092 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-11 04:27:33.927100 | orchestrator | Saturday 11 April 2026 04:27:22 +0000 (0:00:01.244) 0:00:24.360 ********
2026-04-11 04:27:33.927107 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:27:33.927114 | orchestrator |
2026-04-11 04:27:33.927121 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-11 04:27:33.927128 | orchestrator | Saturday 11 April 2026 04:27:23 +0000 (0:00:01.202) 0:00:25.563 ********
2026-04-11 04:27:33.927135 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:27:33.927142 | orchestrator |
2026-04-11 04:27:33.927149 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-11 04:27:33.927157 | orchestrator | Saturday 11 April 2026 04:27:25 +0000 (0:00:01.271) 0:00:26.834 ********
2026-04-11 04:27:33.927165 | orchestrator | ok: [testbed-manager]
2026-04-11 04:27:33.927173 | orchestrator |
2026-04-11 04:27:33.927180 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-11 04:27:33.927187 | orchestrator | Saturday 11 April 2026 04:27:27 +0000 (0:00:02.107) 0:00:28.942 ********
2026-04-11 04:27:33.927193 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-11 04:27:33.927201 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-11 04:27:33.927211 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-11 04:27:33.927218 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-11 04:27:33.927227 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-11 04:27:33.927233 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-11 04:27:33.927237 | orchestrator |
2026-04-11 04:27:33.927242 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-11 04:27:33.927247 | orchestrator | Saturday 11 April 2026 04:27:30 +0000 (0:00:03.650) 0:00:32.592 ********
2026-04-11 04:27:33.927251 | orchestrator | ok: [testbed-manager]
2026-04-11 04:27:33.927256 | orchestrator |
2026-04-11 04:27:33.927260 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 04:27:33.927265 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-11 04:27:33.927270 | orchestrator |
2026-04-11 04:27:33.927274 | orchestrator |
2026-04-11 04:27:33.927279 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 04:27:33.927290 | orchestrator | Saturday 11 April 2026 04:27:33 +0000 (0:00:02.635) 0:00:35.227 ********
2026-04-11 04:27:33.927295 | orchestrator | ===============================================================================
2026-04-11 04:27:33.927356 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.65s
2026-04-11 04:27:33.927370 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 3.35s
2026-04-11 04:27:33.927375 | orchestrator | osism.services.frr : Install frr package -------------------------------- 3.23s
2026-04-11 04:27:33.927386 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.64s
2026-04-11 04:27:33.927391 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.64s
2026-04-11 04:27:33.927395 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.63s
2026-04-11 04:27:33.927400 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 2.15s
2026-04-11 04:27:33.927404 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 2.11s
2026-04-11 04:27:33.927409 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 2.11s
2026-04-11 04:27:33.927413 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 1.28s
2026-04-11 04:27:33.927417 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.27s
2026-04-11 04:27:33.927422 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 1.24s
2026-04-11 04:27:33.927426 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 1.24s
2026-04-11 04:27:33.927431 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.20s
2026-04-11 04:27:34.177968 | orchestrator | + osism apply kubernetes
2026-04-11 04:27:35.623511 | orchestrator | 2026-04-11 04:27:35 | INFO  | Prepare task for execution of kubernetes.
2026-04-11 04:27:35.698714 | orchestrator | 2026-04-11 04:27:35 | INFO  | Task 3ea92ee6-0233-4a04-8bc2-4d5f03893550 (kubernetes) was prepared for execution.
2026-04-11 04:27:35.698834 | orchestrator | 2026-04-11 04:27:35 | INFO  | It takes a moment until task 3ea92ee6-0233-4a04-8bc2-4d5f03893550 (kubernetes) has been started and output is visible here.
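The traces above show two retry mechanisms: include.sh exports `OSISM_APPLY_RETRY=1`, and the pull step passes `-r 2` to `osism apply` directly. A generic wrapper honoring such a retry count could look like the following sketch (`retry` is a hypothetical helper for illustration, not a function from the testbed scripts or the osism CLI):

```shell
# retry N CMD...: run CMD until it succeeds, making at most N attempts.
# Returns 0 on the first success, 1 if all N attempts fail.
retry() {
    local attempts=$1 n=0
    shift
    until "$@"; do
        n=$((n + 1))
        if [ "$n" -ge "$attempts" ]; then
            echo "retry: giving up on '$*' after $attempts attempts" >&2
            return 1
        fi
        echo "retry: attempt $n/$attempts of '$*' failed, retrying" >&2
    done
}
```

Usage in the spirit of the scripts above: `retry "${OSISM_APPLY_RETRY:-1}" osism apply frr`.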
2026-04-11 04:28:23.161908 | orchestrator | 2026-04-11 04:28:23.162094 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-11 04:28:23.162119 | orchestrator | 2026-04-11 04:28:23.162131 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-11 04:28:23.162144 | orchestrator | Saturday 11 April 2026 04:27:42 +0000 (0:00:02.282) 0:00:02.282 ******** 2026-04-11 04:28:23.162156 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:28:23.162168 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:28:23.162179 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:28:23.162191 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:28:23.162202 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:28:23.162213 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:28:23.162303 | orchestrator | 2026-04-11 04:28:23.162327 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-11 04:28:23.162339 | orchestrator | Saturday 11 April 2026 04:27:47 +0000 (0:00:05.356) 0:00:07.639 ******** 2026-04-11 04:28:23.162351 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:28:23.162363 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:28:23.162374 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:28:23.162386 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:28:23.162399 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:28:23.162411 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:28:23.162424 | orchestrator | 2026-04-11 04:28:23.162437 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-11 04:28:23.162449 | orchestrator | Saturday 11 April 2026 04:27:49 +0000 (0:00:02.223) 0:00:09.862 ******** 2026-04-11 04:28:23.162462 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:28:23.162500 | orchestrator | skipping: [testbed-node-4] 2026-04-11 
04:28:23.162513 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:28:23.162525 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:28:23.162538 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:28:23.162550 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:28:23.162563 | orchestrator | 2026-04-11 04:28:23.162575 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-11 04:28:23.162588 | orchestrator | Saturday 11 April 2026 04:27:51 +0000 (0:00:02.100) 0:00:11.962 ******** 2026-04-11 04:28:23.162601 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:28:23.162613 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:28:23.162625 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:28:23.162639 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:28:23.162658 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:28:23.162677 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:28:23.162711 | orchestrator | 2026-04-11 04:28:23.162733 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-04-11 04:28:23.162752 | orchestrator | Saturday 11 April 2026 04:27:54 +0000 (0:00:02.847) 0:00:14.810 ******** 2026-04-11 04:28:23.162771 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:28:23.162789 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:28:23.162809 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:28:23.162828 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:28:23.162846 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:28:23.162857 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:28:23.162868 | orchestrator | 2026-04-11 04:28:23.162879 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-04-11 04:28:23.162890 | orchestrator | Saturday 11 April 2026 04:27:57 +0000 (0:00:02.304) 0:00:17.115 ******** 2026-04-11 04:28:23.162901 | orchestrator | ok: [testbed-node-3] 2026-04-11 
04:28:23.162911 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:28:23.162922 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:28:23.162933 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:28:23.162943 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:28:23.162954 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:28:23.162965 | orchestrator | 2026-04-11 04:28:23.162975 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-04-11 04:28:23.162986 | orchestrator | Saturday 11 April 2026 04:28:00 +0000 (0:00:03.324) 0:00:20.439 ******** 2026-04-11 04:28:23.162997 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:28:23.163008 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:28:23.163019 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:28:23.163030 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:28:23.163040 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:28:23.163051 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:28:23.163062 | orchestrator | 2026-04-11 04:28:23.163073 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-04-11 04:28:23.163084 | orchestrator | Saturday 11 April 2026 04:28:02 +0000 (0:00:02.018) 0:00:22.457 ******** 2026-04-11 04:28:23.163095 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:28:23.163105 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:28:23.163116 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:28:23.163127 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:28:23.163137 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:28:23.163148 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:28:23.163159 | orchestrator | 2026-04-11 04:28:23.163170 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-04-11 04:28:23.163181 | orchestrator | Saturday 11 April 2026 04:28:04 +0000 
(0:00:02.200) 0:00:24.658 ******** 2026-04-11 04:28:23.163192 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-11 04:28:23.163202 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-11 04:28:23.163213 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:28:23.163248 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-11 04:28:23.163267 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-11 04:28:23.163279 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:28:23.163289 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-11 04:28:23.163300 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-11 04:28:23.163310 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:28:23.163322 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-11 04:28:23.163332 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-11 04:28:23.163343 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:28:23.163375 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-11 04:28:23.163386 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-11 04:28:23.163397 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:28:23.163408 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-11 04:28:23.163420 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-11 04:28:23.163438 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:28:23.163458 | orchestrator | 2026-04-11 04:28:23.163476 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to 
sudo secure_path] ********************* 2026-04-11 04:28:23.163492 | orchestrator | Saturday 11 April 2026 04:28:06 +0000 (0:00:02.067) 0:00:26.725 ******** 2026-04-11 04:28:23.163504 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:28:23.163514 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:28:23.163525 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:28:23.163536 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:28:23.163546 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:28:23.163557 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:28:23.163567 | orchestrator | 2026-04-11 04:28:23.163578 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-04-11 04:28:23.163590 | orchestrator | Saturday 11 April 2026 04:28:08 +0000 (0:00:02.276) 0:00:29.002 ******** 2026-04-11 04:28:23.163601 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:28:23.163612 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:28:23.163622 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:28:23.163633 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:28:23.163650 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:28:23.163668 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:28:23.163686 | orchestrator | 2026-04-11 04:28:23.163703 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-04-11 04:28:23.163720 | orchestrator | Saturday 11 April 2026 04:28:10 +0000 (0:00:02.063) 0:00:31.066 ******** 2026-04-11 04:28:23.163740 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:28:23.163760 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:28:23.163779 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:28:23.163797 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:28:23.163812 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:28:23.163823 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:28:23.163833 | 
orchestrator | 2026-04-11 04:28:23.163844 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-04-11 04:28:23.163855 | orchestrator | Saturday 11 April 2026 04:28:13 +0000 (0:00:02.734) 0:00:33.800 ******** 2026-04-11 04:28:23.163866 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:28:23.163877 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:28:23.163887 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:28:23.163902 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:28:23.163913 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:28:23.163984 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:28:23.163997 | orchestrator | 2026-04-11 04:28:23.164008 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-04-11 04:28:23.164030 | orchestrator | Saturday 11 April 2026 04:28:15 +0000 (0:00:02.284) 0:00:36.084 ******** 2026-04-11 04:28:23.164041 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:28:23.164051 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:28:23.164062 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:28:23.164072 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:28:23.164083 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:28:23.164102 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:28:23.164113 | orchestrator | 2026-04-11 04:28:23.164125 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-04-11 04:28:23.164137 | orchestrator | Saturday 11 April 2026 04:28:18 +0000 (0:00:02.471) 0:00:38.555 ******** 2026-04-11 04:28:23.164148 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:28:23.164159 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:28:23.164170 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:28:23.164180 | orchestrator | skipping: 
[testbed-node-0] 2026-04-11 04:28:23.164191 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:28:23.164201 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:28:23.164212 | orchestrator | 2026-04-11 04:28:23.164355 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-04-11 04:28:23.164377 | orchestrator | Saturday 11 April 2026 04:28:20 +0000 (0:00:02.182) 0:00:40.738 ******** 2026-04-11 04:28:23.164396 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-04-11 04:28:23.164414 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-04-11 04:28:23.164431 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:28:23.164447 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-04-11 04:28:23.164464 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-04-11 04:28:23.164482 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:28:23.164499 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-04-11 04:28:23.164516 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-04-11 04:28:23.164533 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:28:23.164551 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-04-11 04:28:23.164569 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-04-11 04:28:23.164588 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:28:23.164606 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-04-11 04:28:23.164622 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-04-11 04:28:23.164638 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:28:23.164656 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-04-11 04:28:23.164674 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-04-11 04:28:23.164692 | orchestrator | skipping: [testbed-node-2] 2026-04-11 
2026-04-11 04:28:23.164710 | orchestrator |
2026-04-11 04:28:23.164729 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-11 04:28:23.164749 | orchestrator | Saturday 11 April 2026 04:28:22 +0000 (0:00:01.911) 0:00:42.649 ********
2026-04-11 04:28:23.164767 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:28:23.164796 | orchestrator | skipping: [testbed-node-4]
2026-04-11 04:28:23.164832 | orchestrator | skipping: [testbed-node-5]
2026-04-11 04:30:08.627831 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:30:08.627963 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:30:08.627992 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:30:08.628012 | orchestrator |
2026-04-11 04:30:08.628034 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-11 04:30:08.628123 | orchestrator | Saturday 11 April 2026 04:28:24 +0000 (0:00:02.345) 0:00:44.995 ********
2026-04-11 04:30:08.628142 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:30:08.628153 | orchestrator | skipping: [testbed-node-4]
2026-04-11 04:30:08.628164 | orchestrator | skipping: [testbed-node-5]
2026-04-11 04:30:08.628203 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:30:08.628215 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:30:08.628226 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:30:08.628237 | orchestrator |
2026-04-11 04:30:08.628248 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-11 04:30:08.628259 | orchestrator |
2026-04-11 04:30:08.628270 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-11 04:30:08.628282 | orchestrator | Saturday 11 April 2026 04:28:27 +0000 (0:00:03.009) 0:00:48.004 ********
2026-04-11 04:30:08.628293 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:30:08.628304 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:30:08.628315 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:30:08.628326 | orchestrator |
2026-04-11 04:30:08.628337 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-11 04:30:08.628348 | orchestrator | Saturday 11 April 2026 04:28:32 +0000 (0:00:04.728) 0:00:52.732 ********
2026-04-11 04:30:08.628361 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:30:08.628374 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:30:08.628386 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:30:08.628399 | orchestrator |
2026-04-11 04:30:08.628411 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-11 04:30:08.628424 | orchestrator | Saturday 11 April 2026 04:28:35 +0000 (0:00:02.672) 0:00:55.405 ********
2026-04-11 04:30:08.628436 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:30:08.628449 | orchestrator | changed: [testbed-node-1]
2026-04-11 04:30:08.628463 | orchestrator | changed: [testbed-node-2]
2026-04-11 04:30:08.628475 | orchestrator |
2026-04-11 04:30:08.628489 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-11 04:30:08.628502 | orchestrator | Saturday 11 April 2026 04:28:37 +0000 (0:00:02.290) 0:00:57.695 ********
2026-04-11 04:30:08.628527 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:30:08.628540 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:30:08.628553 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:30:08.628565 | orchestrator |
2026-04-11 04:30:08.628579 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-11 04:30:08.628591 | orchestrator | Saturday 11 April 2026 04:28:39 +0000 (0:00:01.794) 0:00:59.490 ********
2026-04-11 04:30:08.628604 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:30:08.628617 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:30:08.628630 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:30:08.628642 | orchestrator |
2026-04-11 04:30:08.628655 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-11 04:30:08.628668 | orchestrator | Saturday 11 April 2026 04:28:40 +0000 (0:00:01.433) 0:01:00.924 ********
2026-04-11 04:30:08.628680 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:30:08.628693 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:30:08.628706 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:30:08.628718 | orchestrator |
2026-04-11 04:30:08.628731 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-11 04:30:08.628741 | orchestrator | Saturday 11 April 2026 04:28:42 +0000 (0:00:02.116) 0:01:03.040 ********
2026-04-11 04:30:08.628752 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:30:08.628763 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:30:08.628773 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:30:08.628784 | orchestrator |
2026-04-11 04:30:08.628795 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-11 04:30:08.628806 | orchestrator | Saturday 11 April 2026 04:28:45 +0000 (0:00:02.524) 0:01:05.565 ********
2026-04-11 04:30:08.628816 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:30:08.628827 | orchestrator |
2026-04-11 04:30:08.628838 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-11 04:30:08.628849 | orchestrator | Saturday 11 April 2026 04:28:47 +0000 (0:00:01.975) 0:01:07.540 ********
2026-04-11 04:30:08.628869 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:30:08.628879 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:30:08.628890 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:30:08.628901 | orchestrator |
2026-04-11 04:30:08.628912 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-11 04:30:08.628923 | orchestrator | Saturday 11 April 2026 04:28:50 +0000 (0:00:02.898) 0:01:10.439 ********
2026-04-11 04:30:08.628933 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:30:08.628944 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:30:08.628955 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:30:08.628965 | orchestrator |
2026-04-11 04:30:08.628976 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-11 04:30:08.628987 | orchestrator | Saturday 11 April 2026 04:28:51 +0000 (0:00:01.656) 0:01:12.096 ********
2026-04-11 04:30:08.628997 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:30:08.629008 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:30:08.629019 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:30:08.629030 | orchestrator |
2026-04-11 04:30:08.629040 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-11 04:30:08.629051 | orchestrator | Saturday 11 April 2026 04:28:53 +0000 (0:00:01.938) 0:01:14.034 ********
2026-04-11 04:30:08.629087 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:30:08.629099 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:30:08.629110 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:30:08.629121 | orchestrator |
2026-04-11 04:30:08.629132 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-11 04:30:08.629143 | orchestrator | Saturday 11 April 2026 04:28:56 +0000 (0:00:02.533) 0:01:16.568 ********
2026-04-11 04:30:08.629154 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:30:08.629165 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:30:08.629195 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:30:08.629207 | orchestrator |
2026-04-11 04:30:08.629218 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-11 04:30:08.629228 | orchestrator | Saturday 11 April 2026 04:28:57 +0000 (0:00:01.443) 0:01:18.012 ********
2026-04-11 04:30:08.629239 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:30:08.629250 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:30:08.629260 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:30:08.629271 | orchestrator |
2026-04-11 04:30:08.629282 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-11 04:30:08.629292 | orchestrator | Saturday 11 April 2026 04:28:59 +0000 (0:00:01.472) 0:01:19.485 ********
2026-04-11 04:30:08.629303 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:30:08.629313 | orchestrator | changed: [testbed-node-1]
2026-04-11 04:30:08.629324 | orchestrator | changed: [testbed-node-2]
2026-04-11 04:30:08.629335 | orchestrator |
2026-04-11 04:30:08.629346 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-11 04:30:08.629373 | orchestrator | Saturday 11 April 2026 04:29:01 +0000 (0:00:02.211) 0:01:21.696 ********
2026-04-11 04:30:08.629385 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:30:08.629395 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:30:08.629406 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:30:08.629417 | orchestrator |
2026-04-11 04:30:08.629428 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-11 04:30:08.629439 | orchestrator | Saturday 11 April 2026 04:29:03 +0000 (0:00:02.354) 0:01:24.050 ********
2026-04-11 04:30:08.629450 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:30:08.629460 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:30:08.629471 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:30:08.629481 | orchestrator |
2026-04-11 04:30:08.629492 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-04-11 04:30:08.629504 | orchestrator | Saturday 11 April 2026 04:29:05 +0000 (0:00:01.527) 0:01:25.577 ********
2026-04-11 04:30:08.629515 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-11 04:30:08.629535 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-11 04:30:08.629546 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-11 04:30:08.629557 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-11 04:30:08.629568 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-11 04:30:08.629579 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-11 04:30:08.629590 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:30:08.629600 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:30:08.629611 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:30:08.629622 | orchestrator |
2026-04-11 04:30:08.629633 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-11 04:30:08.629644 | orchestrator | Saturday 11 April 2026 04:29:28 +0000 (0:00:23.173) 0:01:48.751 ********
2026-04-11 04:30:08.629655 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:30:08.629666 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:30:08.629676 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:30:08.629687 | orchestrator |
2026-04-11 04:30:08.629698 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-11 04:30:08.629709 | orchestrator | Saturday 11 April 2026 04:29:30 +0000 (0:00:01.506) 0:01:50.258 ********
2026-04-11 04:30:08.629720 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:30:08.629730 | orchestrator | changed: [testbed-node-1]
2026-04-11 04:30:08.629741 | orchestrator | changed: [testbed-node-2]
2026-04-11 04:30:08.629752 | orchestrator |
2026-04-11 04:30:08.629762 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-11 04:30:08.629791 | orchestrator | Saturday 11 April 2026 04:29:32 +0000 (0:00:02.544) 0:01:52.802 ********
2026-04-11 04:30:08.629802 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:30:08.629823 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:30:08.629834 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:30:08.629845 | orchestrator |
2026-04-11 04:30:08.629856 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-11 04:30:08.629867 | orchestrator | Saturday 11 April 2026 04:29:34 +0000 (0:00:02.304) 0:01:55.107 ********
2026-04-11 04:30:08.629877 | orchestrator | changed: [testbed-node-1]
2026-04-11 04:30:08.629888 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:30:08.629899 | orchestrator | changed: [testbed-node-2]
2026-04-11 04:30:08.629910 | orchestrator |
2026-04-11 04:30:08.629921 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-11 04:30:08.629931 | orchestrator | Saturday 11 April 2026 04:30:04 +0000 (0:00:29.217) 0:02:24.325 ********
2026-04-11 04:30:08.629942 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:30:08.629953 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:30:08.629963 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:30:08.629974 | orchestrator |
2026-04-11 04:30:08.629985 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-11 04:30:08.629996 | orchestrator | Saturday 11 April 2026 04:30:06 +0000 (0:00:01.878) 0:02:26.203 ********
2026-04-11 04:30:08.630006 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:30:08.630170 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:30:08.630185 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:30:08.630195 | orchestrator |
2026-04-11 04:30:08.630206 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-11 04:30:08.630217 | orchestrator | Saturday 11 April 2026 04:30:07 +0000 (0:00:01.764) 0:02:27.967 ********
2026-04-11 04:30:08.630228 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:30:08.630254 | orchestrator | changed: [testbed-node-1]
2026-04-11 04:30:08.630266 | orchestrator | changed: [testbed-node-2]
2026-04-11 04:30:08.630277 | orchestrator |
2026-04-11 04:30:08.630299 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-11 04:31:01.017478 | orchestrator | Saturday 11 April 2026 04:30:09 +0000 (0:00:01.819) 0:02:29.786 ********
2026-04-11 04:31:01.017682 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:31:01.017721 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:31:01.017734 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:31:01.017746 | orchestrator |
2026-04-11 04:31:01.017758 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-11 04:31:01.017770 | orchestrator | Saturday 11 April 2026 04:30:11 +0000 (0:00:01.843) 0:02:31.630 ********
2026-04-11 04:31:01.017783 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:31:01.017794 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:31:01.017806 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:31:01.017817 | orchestrator |
2026-04-11 04:31:01.017831 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-11 04:31:01.017848 | orchestrator | Saturday 11 April 2026 04:30:13 +0000 (0:00:01.692) 0:02:33.323 ********
2026-04-11 04:31:01.017860 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:31:01.017892 | orchestrator | changed: [testbed-node-1]
2026-04-11 04:31:01.017904 | orchestrator | changed: [testbed-node-2]
2026-04-11 04:31:01.017916 | orchestrator |
2026-04-11 04:31:01.017928 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-11 04:31:01.017939 | orchestrator | Saturday 11 April 2026 04:30:15 +0000 (0:00:01.825) 0:02:35.149 ********
2026-04-11 04:31:01.017951 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:31:01.017963 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:31:01.017974 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:31:01.017986 | orchestrator |
2026-04-11 04:31:01.018083 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-11 04:31:01.018098 | orchestrator | Saturday 11 April 2026 04:30:16 +0000 (0:00:01.787) 0:02:36.936 ********
2026-04-11 04:31:01.018113 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:31:01.018128 | orchestrator | changed: [testbed-node-1]
2026-04-11 04:31:01.018142 | orchestrator | changed: [testbed-node-2]
2026-04-11 04:31:01.018157 | orchestrator |
2026-04-11 04:31:01.018173 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-11 04:31:01.018187 | orchestrator | Saturday 11 April 2026 04:30:18 +0000 (0:00:01.966) 0:02:38.903 ********
2026-04-11 04:31:01.018201 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:31:01.018216 | orchestrator | changed: [testbed-node-1]
2026-04-11 04:31:01.018229 | orchestrator | changed: [testbed-node-2]
2026-04-11 04:31:01.018244 | orchestrator |
2026-04-11 04:31:01.018259 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-11 04:31:01.018275 | orchestrator | Saturday 11 April 2026 04:30:21 +0000 (0:00:02.257) 0:02:41.161 ********
2026-04-11 04:31:01.018292 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:31:01.018308 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:31:01.018325 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:31:01.018342 | orchestrator |
2026-04-11 04:31:01.018359 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-11 04:31:01.018372 | orchestrator | Saturday 11 April 2026 04:30:22 +0000 (0:00:01.563) 0:02:42.725 ********
2026-04-11 04:31:01.018390 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:31:01.018401 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:31:01.018410 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:31:01.018420 | orchestrator |
2026-04-11 04:31:01.018429 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-11 04:31:01.018439 | orchestrator | Saturday 11 April 2026 04:30:24 +0000 (0:00:01.605) 0:02:44.330 ********
2026-04-11 04:31:01.018454 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:31:01.018471 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:31:01.018486 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:31:01.018534 | orchestrator |
2026-04-11 04:31:01.018552 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-11 04:31:01.018570 | orchestrator | Saturday 11 April 2026 04:30:26 +0000 (0:00:01.952) 0:02:46.283 ********
2026-04-11 04:31:01.018583 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:31:01.018592 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:31:01.018602 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:31:01.018611 | orchestrator |
2026-04-11 04:31:01.018641 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-11 04:31:01.018653 | orchestrator | Saturday 11 April 2026 04:30:28 +0000 (0:00:01.920) 0:02:48.203 ********
2026-04-11 04:31:01.018663 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-11 04:31:01.018673 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-11 04:31:01.018682 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-11 04:31:01.018694 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-11 04:31:01.018710 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-11 04:31:01.018727 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-11 04:31:01.018746 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-11 04:31:01.018769 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-11 04:31:01.018785 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-11 04:31:01.018801 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-11 04:31:01.018816 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-11 04:31:01.018852 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-11 04:31:01.018898 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-11 04:31:01.018915 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-11 04:31:01.018929 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-11 04:31:01.018939 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-11 04:31:01.018949 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-11 04:31:01.018959 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-11 04:31:01.018968 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-11 04:31:01.018978 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-11 04:31:01.019021 | orchestrator |
2026-04-11 04:31:01.019034 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-11 04:31:01.019044 | orchestrator |
2026-04-11 04:31:01.019059 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-11 04:31:01.019075 | orchestrator | Saturday 11 April 2026 04:30:32 +0000 (0:00:04.488) 0:02:52.692 ********
2026-04-11 04:31:01.019089 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:31:01.019103 | orchestrator | ok: [testbed-node-4]
2026-04-11 04:31:01.019117 | orchestrator | ok: [testbed-node-5]
2026-04-11 04:31:01.019132 | orchestrator |
2026-04-11 04:31:01.019147 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-11 04:31:01.019162 | orchestrator | Saturday 11 April 2026 04:30:34 +0000 (0:00:01.620) 0:02:54.312 ********
2026-04-11 04:31:01.019192 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:31:01.019210 | orchestrator | ok: [testbed-node-4]
2026-04-11 04:31:01.019226 | orchestrator | ok: [testbed-node-5]
2026-04-11 04:31:01.019242 | orchestrator |
2026-04-11 04:31:01.019259 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-11 04:31:01.019270 | orchestrator | Saturday 11 April 2026 04:30:36 +0000 (0:00:01.824) 0:02:56.137 ********
2026-04-11 04:31:01.019279 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:31:01.019289 | orchestrator | ok: [testbed-node-4]
2026-04-11 04:31:01.019298 | orchestrator | ok: [testbed-node-5]
2026-04-11 04:31:01.019308 | orchestrator |
2026-04-11 04:31:01.019317 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-11 04:31:01.019327 | orchestrator | Saturday 11 April 2026 04:30:37 +0000 (0:00:01.503) 0:02:57.640 ********
2026-04-11 04:31:01.019337 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 04:31:01.019346 | orchestrator |
2026-04-11 04:31:01.019356 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-11 04:31:01.019365 | orchestrator | Saturday 11 April 2026 04:30:39 +0000 (0:00:02.153) 0:02:59.795 ********
2026-04-11 04:31:01.019375 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:31:01.019385 | orchestrator | skipping: [testbed-node-4]
2026-04-11 04:31:01.019394 | orchestrator | skipping: [testbed-node-5]
2026-04-11 04:31:01.019404 | orchestrator |
2026-04-11 04:31:01.019413 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-11 04:31:01.019423 | orchestrator | Saturday 11 April 2026 04:30:41 +0000 (0:00:01.480) 0:03:01.275 ********
2026-04-11 04:31:01.019433 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:31:01.019442 | orchestrator | skipping: [testbed-node-4]
2026-04-11 04:31:01.019452 | orchestrator | skipping: [testbed-node-5]
2026-04-11 04:31:01.019461 | orchestrator |
2026-04-11 04:31:01.019471 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-11 04:31:01.019480 | orchestrator | Saturday 11 April 2026 04:30:42 +0000 (0:00:01.408) 0:03:02.684 ********
2026-04-11 04:31:01.019490 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:31:01.019500 | orchestrator | skipping: [testbed-node-4]
2026-04-11 04:31:01.019509 | orchestrator | skipping: [testbed-node-5]
2026-04-11 04:31:01.019519 | orchestrator |
2026-04-11 04:31:01.019528 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-11 04:31:01.019538 | orchestrator | Saturday 11 April 2026 04:30:44 +0000 (0:00:01.511) 0:03:04.195 ********
2026-04-11 04:31:01.019547 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:31:01.019557 | orchestrator | ok: [testbed-node-4]
2026-04-11 04:31:01.019566 | orchestrator | ok: [testbed-node-5]
2026-04-11 04:31:01.019576 | orchestrator |
2026-04-11 04:31:01.019585 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-11 04:31:01.019595 | orchestrator | Saturday 11 April 2026 04:30:45 +0000 (0:00:01.804) 0:03:06.000 ********
2026-04-11 04:31:01.019605 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:31:01.019616 | orchestrator | ok: [testbed-node-4]
2026-04-11 04:31:01.019633 | orchestrator | ok: [testbed-node-5]
2026-04-11 04:31:01.019650 | orchestrator |
2026-04-11 04:31:01.019667 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-11 04:31:01.019683 | orchestrator | Saturday 11 April 2026 04:30:48 +0000 (0:00:02.233) 0:03:08.233 ********
2026-04-11 04:31:01.019693 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:31:01.019702 | orchestrator | ok: [testbed-node-4]
2026-04-11 04:31:01.019712 | orchestrator | ok: [testbed-node-5]
2026-04-11 04:31:01.019721 | orchestrator |
2026-04-11 04:31:01.019730 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-11 04:31:01.019740 | orchestrator | Saturday 11 April 2026 04:30:50 +0000 (0:00:02.413) 0:03:10.647 ********
2026-04-11 04:31:01.019750 | orchestrator | changed: [testbed-node-3]
2026-04-11 04:31:01.019759 | orchestrator | changed: [testbed-node-4]
2026-04-11 04:31:01.019776 | orchestrator | changed: [testbed-node-5]
2026-04-11 04:31:01.019786 | orchestrator |
2026-04-11 04:31:01.019795 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-11 04:31:01.019805 | orchestrator |
2026-04-11 04:31:01.019814 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-11 04:31:01.019831 | orchestrator | Saturday 11 April 2026 04:30:58 +0000 (0:00:08.196) 0:03:18.844 ********
2026-04-11 04:31:01.019841 | orchestrator | ok: [testbed-manager]
2026-04-11 04:31:01.019853 | orchestrator |
2026-04-11 04:31:01.019870 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-11 04:31:01.019898 | orchestrator | Saturday 11 April 2026 04:31:00 +0000 (0:00:02.262) 0:03:21.106 ********
2026-04-11 04:32:14.105544 | orchestrator | ok: [testbed-manager]
2026-04-11 04:32:14.105640 | orchestrator |
2026-04-11 04:32:14.105649 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-11 04:32:14.105655 | orchestrator | Saturday 11 April 2026 04:31:02 +0000 (0:00:01.481) 0:03:22.587 ********
2026-04-11 04:32:14.105659 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-11 04:32:14.105664 | orchestrator |
2026-04-11 04:32:14.105668 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-11 04:32:14.105672 | orchestrator | Saturday 11 April 2026 04:31:04 +0000 (0:00:01.675) 0:03:24.262 ********
2026-04-11 04:32:14.105676 | orchestrator | changed: [testbed-manager]
2026-04-11 04:32:14.105681 | orchestrator |
2026-04-11 04:32:14.105685 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-11 04:32:14.105689 | orchestrator | Saturday 11 April 2026 04:31:06 +0000 (0:00:02.058) 0:03:26.320 ********
2026-04-11 04:32:14.105693 | orchestrator | changed: [testbed-manager]
2026-04-11 04:32:14.105697 | orchestrator |
2026-04-11 04:32:14.105700 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-11 04:32:14.105704 | orchestrator | Saturday 11 April 2026 04:31:08 +0000 (0:00:02.009) 0:03:28.330 ********
2026-04-11 04:32:14.105708 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-11 04:32:14.105712 | orchestrator |
2026-04-11 04:32:14.105716 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-11 04:32:14.105720 | orchestrator | Saturday 11 April 2026 04:31:11 +0000 (0:00:03.292) 0:03:31.623 ********
2026-04-11 04:32:14.105724 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-11 04:32:14.105728 | orchestrator |
2026-04-11 04:32:14.105731 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-11 04:32:14.105735 | orchestrator | Saturday 11 April 2026 04:31:13 +0000 (0:00:02.013) 0:03:33.637 ********
2026-04-11 04:32:14.105739 | orchestrator | ok: [testbed-manager]
2026-04-11 04:32:14.105743 | orchestrator |
2026-04-11 04:32:14.105747 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-11 04:32:14.105751 | orchestrator | Saturday 11 April 2026 04:31:15 +0000 (0:00:01.501) 0:03:35.139 ********
2026-04-11 04:32:14.105754 | orchestrator | ok: [testbed-manager]
2026-04-11 04:32:14.105758 | orchestrator |
2026-04-11 04:32:14.105762 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-11 04:32:14.105766 | orchestrator |
2026-04-11 04:32:14.105770 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-11 04:32:14.105773 | orchestrator | Saturday 11 April 2026 04:31:16 +0000 (0:00:01.797) 0:03:36.936 ********
2026-04-11 04:32:14.105777 | orchestrator | ok: [testbed-manager]
2026-04-11 04:32:14.105781 | orchestrator |
2026-04-11 04:32:14.105785 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-11 04:32:14.105788 | orchestrator | Saturday 11 April 2026 04:31:18 +0000 (0:00:01.198) 0:03:38.135 ********
2026-04-11 04:32:14.105792 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-11 04:32:14.105797 | orchestrator |
2026-04-11 04:32:14.105801 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-11 04:32:14.105807 | orchestrator | Saturday 11 April 2026 04:31:19 +0000 (0:00:01.619) 0:03:39.755 ********
2026-04-11 04:32:14.105835 | orchestrator | ok: [testbed-manager]
2026-04-11 04:32:14.105842 | orchestrator |
2026-04-11 04:32:14.105848 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-11 04:32:14.105854 | orchestrator | Saturday 11 April 2026 04:31:21 +0000 (0:00:01.909) 0:03:41.664 ********
2026-04-11 04:32:14.105860 | orchestrator | ok: [testbed-manager]
2026-04-11 04:32:14.105865 | orchestrator |
2026-04-11 04:32:14.105871 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-11 04:32:14.105877 | orchestrator | Saturday 11 April 2026 04:31:24 +0000 (0:00:02.815) 0:03:44.480 ********
2026-04-11 04:32:14.105883 | orchestrator | ok: [testbed-manager]
2026-04-11 04:32:14.105889 | orchestrator |
2026-04-11 04:32:14.105895 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-11 04:32:14.105901 | orchestrator | Saturday 11 April 2026 04:31:25 +0000 (0:00:01.484) 0:03:45.964 ********
2026-04-11 04:32:14.106062 | orchestrator | ok: [testbed-manager]
2026-04-11 04:32:14.106075 | orchestrator |
2026-04-11 04:32:14.106080 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-11 04:32:14.106084 | orchestrator | Saturday 11 April 2026 04:31:27 +0000 (0:00:01.522) 0:03:47.487 ********
2026-04-11 04:32:14.106088 | orchestrator | ok: [testbed-manager]
2026-04-11 04:32:14.106092 | orchestrator |
2026-04-11 04:32:14.106096 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-11 04:32:14.106099 | orchestrator | Saturday 11 April 2026 04:31:29 +0000 (0:00:02.655) 0:03:49.221 ********
2026-04-11 04:32:14.106104 | orchestrator | ok: [testbed-manager]
2026-04-11 04:32:14.106107 | orchestrator |
2026-04-11 04:32:14.106111 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-11 04:32:14.106115 | orchestrator | Saturday 11 April 2026 04:31:31 +0000 (0:00:02.655) 0:03:51.876 ********
2026-04-11 04:32:14.106119 | orchestrator | ok: [testbed-manager]
2026-04-11 04:32:14.106122 | orchestrator |
2026-04-11 04:32:14.106126 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-11 04:32:14.106130 | orchestrator |
2026-04-11 04:32:14.106134 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-11 04:32:14.106138 | orchestrator | Saturday 11 April 2026 04:31:33 +0000 (0:00:02.001) 0:03:53.878 ********
2026-04-11 04:32:14.106141 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:32:14.106145 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:32:14.106149 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:32:14.106153 | orchestrator |
2026-04-11 04:32:14.106156 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-11 04:32:14.106161 | orchestrator | Saturday 11 April 2026 04:31:35 +0000 (0:00:01.470) 0:03:55.349 ********
2026-04-11 04:32:14.106164 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:32:14.106168 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:32:14.106172 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:32:14.106176 | orchestrator |
2026-04-11 04:32:14.106206 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-11 04:32:14.106210 | orchestrator | Saturday 11 April 2026 04:31:36 +0000 (0:00:01.414) 0:03:56.764 ********
2026-04-11 04:32:14.106214 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:32:14.106218 | orchestrator |
2026-04-11 04:32:14.106222 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-11 04:32:14.106226 | orchestrator | Saturday 11 April 2026 04:31:38 +0000 (0:00:02.022) 0:03:58.786 ********
2026-04-11 04:32:14.106230 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-11 04:32:14.106234 | orchestrator |
2026-04-11 04:32:14.106237 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-11 04:32:14.106241 | orchestrator | Saturday 11 April 2026 04:31:40 +0000 (0:00:02.037) 0:04:00.824 ********
2026-04-11 04:32:14.106245 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 04:32:14.106257 | orchestrator |
2026-04-11 04:32:14.106260 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-11 04:32:14.106264 | orchestrator | Saturday 11 April 2026 04:31:42 +0000 (0:00:01.971) 0:04:02.796 ********
2026-04-11 04:32:14.106268 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:32:14.106271 | orchestrator |
2026-04-11 04:32:14.106275 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-11 04:32:14.106279 | orchestrator | Saturday 11 April 2026 04:31:43 +0000 (0:00:01.179) 0:04:03.976 ********
2026-04-11 04:32:14.106282 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 04:32:14.106286 | orchestrator |
2026-04-11 04:32:14.106290 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-11 04:32:14.106293 | orchestrator | Saturday 11 April 2026 04:31:46 +0000 (0:00:02.215) 0:04:06.192 ********
2026-04-11 04:32:14.106302 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 04:32:14.106306 | orchestrator |
2026-04-11 04:32:14.106310 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-11 04:32:14.106313 | orchestrator | Saturday 11 April 2026 04:31:48 +0000 (0:00:02.386) 0:04:08.579 ********
2026-04-11 04:32:14.106317 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 04:32:14.106321 | orchestrator |
2026-04-11 04:32:14.106324 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-11 04:32:14.106328 | orchestrator | Saturday 11 April 2026 04:31:49 +0000 (0:00:01.265) 0:04:09.845 ********
2026-04-11 04:32:14.106332 | orchestrator | ok:
[testbed-node-0 -> localhost] 2026-04-11 04:32:14.106335 | orchestrator | 2026-04-11 04:32:14.106339 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-11 04:32:14.106343 | orchestrator | Saturday 11 April 2026 04:31:50 +0000 (0:00:01.214) 0:04:11.059 ******** 2026-04-11 04:32:14.106346 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-04-11 04:32:14.106350 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-04-11 04:32:14.106355 | orchestrator | } 2026-04-11 04:32:14.106359 | orchestrator | 2026-04-11 04:32:14.106363 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-11 04:32:14.106367 | orchestrator | Saturday 11 April 2026 04:31:52 +0000 (0:00:01.182) 0:04:12.242 ******** 2026-04-11 04:32:14.106370 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:32:14.106374 | orchestrator | 2026-04-11 04:32:14.106378 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-11 04:32:14.106382 | orchestrator | Saturday 11 April 2026 04:31:53 +0000 (0:00:01.222) 0:04:13.464 ******** 2026-04-11 04:32:14.106385 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-11 04:32:14.106389 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-11 04:32:14.106427 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-11 04:32:14.106432 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-11 04:32:14.106435 | orchestrator | 2026-04-11 04:32:14.106439 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-11 04:32:14.106443 | orchestrator | Saturday 11 April 2026 04:31:59 +0000 (0:00:06.488) 0:04:19.953 ******** 2026-04-11 04:32:14.106446 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-04-11 04:32:14.106450 | orchestrator | 2026-04-11 04:32:14.106454 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-11 04:32:14.106458 | orchestrator | Saturday 11 April 2026 04:32:02 +0000 (0:00:02.483) 0:04:22.437 ******** 2026-04-11 04:32:14.106461 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-11 04:32:14.106465 | orchestrator | 2026-04-11 04:32:14.106469 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-11 04:32:14.106472 | orchestrator | Saturday 11 April 2026 04:32:05 +0000 (0:00:02.832) 0:04:25.269 ******** 2026-04-11 04:32:14.106476 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-11 04:32:14.106483 | orchestrator | 2026-04-11 04:32:14.106487 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-11 04:32:14.106491 | orchestrator | Saturday 11 April 2026 04:32:09 +0000 (0:00:04.277) 0:04:29.547 ******** 2026-04-11 04:32:14.106495 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:32:14.106498 | orchestrator | 2026-04-11 04:32:14.106502 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-11 04:32:14.106506 | orchestrator | Saturday 11 April 2026 04:32:10 +0000 (0:00:01.208) 0:04:30.755 ******** 2026-04-11 04:32:14.106510 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-11 04:32:14.106517 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-11 04:32:14.106520 | orchestrator | 2026-04-11 04:32:14.106524 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-11 04:32:14.106528 | orchestrator | Saturday 11 April 2026 04:32:13 +0000 (0:00:03.179) 0:04:33.935 ******** 2026-04-11 
04:32:14.106532 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:32:14.106539 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:32:44.082194 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:32:44.082288 | orchestrator | 2026-04-11 04:32:44.082301 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-11 04:32:44.082311 | orchestrator | Saturday 11 April 2026 04:32:15 +0000 (0:00:01.558) 0:04:35.493 ******** 2026-04-11 04:32:44.082319 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:32:44.082328 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:32:44.082336 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:32:44.082344 | orchestrator | 2026-04-11 04:32:44.082352 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-11 04:32:44.082360 | orchestrator | 2026-04-11 04:32:44.082368 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-11 04:32:44.082376 | orchestrator | Saturday 11 April 2026 04:32:17 +0000 (0:00:02.472) 0:04:37.965 ******** 2026-04-11 04:32:44.082384 | orchestrator | ok: [testbed-manager] 2026-04-11 04:32:44.082392 | orchestrator | 2026-04-11 04:32:44.082400 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-04-11 04:32:44.082408 | orchestrator | Saturday 11 April 2026 04:32:19 +0000 (0:00:01.162) 0:04:39.128 ******** 2026-04-11 04:32:44.082416 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-11 04:32:44.082425 | orchestrator | 2026-04-11 04:32:44.082433 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-11 04:32:44.082440 | orchestrator | Saturday 11 April 2026 04:32:20 +0000 (0:00:01.618) 0:04:40.746 ******** 2026-04-11 04:32:44.082448 | orchestrator | ok: [testbed-manager] 2026-04-11 04:32:44.082456 | 
orchestrator | 2026-04-11 04:32:44.082464 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-11 04:32:44.082472 | orchestrator | 2026-04-11 04:32:44.082480 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-11 04:32:44.082488 | orchestrator | Saturday 11 April 2026 04:32:26 +0000 (0:00:05.636) 0:04:46.383 ******** 2026-04-11 04:32:44.082495 | orchestrator | ok: [testbed-node-3] 2026-04-11 04:32:44.082503 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:32:44.082511 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:32:44.082519 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:32:44.082527 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:32:44.082534 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:32:44.082542 | orchestrator | 2026-04-11 04:32:44.082550 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-11 04:32:44.082558 | orchestrator | Saturday 11 April 2026 04:32:28 +0000 (0:00:01.831) 0:04:48.214 ******** 2026-04-11 04:32:44.082566 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-11 04:32:44.082574 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-11 04:32:44.082599 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-11 04:32:44.082608 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-11 04:32:44.082616 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-11 04:32:44.082623 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-11 04:32:44.082631 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 
2026-04-11 04:32:44.082639 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-11 04:32:44.082647 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-11 04:32:44.082654 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-11 04:32:44.082662 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-11 04:32:44.082670 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-11 04:32:44.082678 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-11 04:32:44.082685 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-11 04:32:44.082693 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-11 04:32:44.082701 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-11 04:32:44.082709 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-11 04:32:44.082716 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-11 04:32:44.082725 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-11 04:32:44.082732 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-11 04:32:44.082740 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-11 04:32:44.082748 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-11 04:32:44.082756 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-11 
04:32:44.082763 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-11 04:32:44.082771 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-11 04:32:44.082779 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-11 04:32:44.082800 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-11 04:32:44.082808 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-11 04:32:44.082816 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-11 04:32:44.082824 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-11 04:32:44.082832 | orchestrator | 2026-04-11 04:32:44.082840 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-11 04:32:44.082848 | orchestrator | Saturday 11 April 2026 04:32:39 +0000 (0:00:11.362) 0:04:59.577 ******** 2026-04-11 04:32:44.082855 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:32:44.082863 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:32:44.082871 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:32:44.082901 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:32:44.082909 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:32:44.082916 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:32:44.082924 | orchestrator | 2026-04-11 04:32:44.082932 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-11 04:32:44.082946 | orchestrator | Saturday 11 April 2026 04:32:41 +0000 (0:00:01.786) 0:05:01.364 ******** 2026-04-11 04:32:44.082954 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:32:44.082962 | orchestrator | skipping: [testbed-node-4] 
2026-04-11 04:32:44.082970 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:32:44.082977 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:32:44.082985 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:32:44.082993 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:32:44.083001 | orchestrator | 2026-04-11 04:32:44.083008 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 04:32:44.083026 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 04:32:44.083056 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-11 04:32:44.083065 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-11 04:32:44.083073 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-11 04:32:44.083081 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-11 04:32:44.083088 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-11 04:32:44.083096 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-11 04:32:44.083104 | orchestrator | 2026-04-11 04:32:44.083112 | orchestrator | 2026-04-11 04:32:44.083120 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 04:32:44.083128 | orchestrator | Saturday 11 April 2026 04:32:44 +0000 (0:00:02.804) 0:05:04.169 ******** 2026-04-11 04:32:44.083136 | orchestrator | =============================================================================== 2026-04-11 04:32:44.083144 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 29.22s 2026-04-11 04:32:44.083152 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.17s 2026-04-11 04:32:44.083160 | orchestrator | Manage labels ---------------------------------------------------------- 11.36s 2026-04-11 04:32:44.083168 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.20s 2026-04-11 04:32:44.083175 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 6.49s 2026-04-11 04:32:44.083183 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.64s 2026-04-11 04:32:44.083191 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 5.36s 2026-04-11 04:32:44.083199 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 4.73s 2026-04-11 04:32:44.083207 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.49s 2026-04-11 04:32:44.083215 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.28s 2026-04-11 04:32:44.083223 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 3.32s 2026-04-11 04:32:44.083231 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.29s 2026-04-11 04:32:44.083239 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 3.18s 2026-04-11 04:32:44.083247 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 3.01s 2026-04-11 04:32:44.083255 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.90s 2026-04-11 04:32:44.083272 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.85s 2026-04-11 04:32:44.083280 | orchestrator | 
k3s_server_post : Copy BGP manifests to first master -------------------- 2.83s 2026-04-11 04:32:44.083288 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.82s 2026-04-11 04:32:44.083302 | orchestrator | Manage taints ----------------------------------------------------------- 2.80s 2026-04-11 04:32:44.463455 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.73s 2026-04-11 04:32:44.690755 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-11 04:32:44.690846 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-04-11 04:32:44.701455 | orchestrator | + set -e 2026-04-11 04:32:44.701547 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-11 04:32:44.701556 | orchestrator | ++ export INTERACTIVE=false 2026-04-11 04:32:44.701563 | orchestrator | ++ INTERACTIVE=false 2026-04-11 04:32:44.701568 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-11 04:32:44.701574 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-11 04:32:44.701579 | orchestrator | + osism apply openstackclient 2026-04-11 04:32:56.290137 | orchestrator | 2026-04-11 04:32:56 | INFO  | Prepare task for execution of openstackclient. 2026-04-11 04:32:56.364751 | orchestrator | 2026-04-11 04:32:56 | INFO  | Task 6a2d0a84-e352-4ae9-9b97-62c37313bc02 (openstackclient) was prepared for execution. 2026-04-11 04:32:56.365335 | orchestrator | 2026-04-11 04:32:56 | INFO  | It takes a moment until task 6a2d0a84-e352-4ae9-9b97-62c37313bc02 (openstackclient) has been started and output is visible here. 
2026-04-11 04:33:33.431384 | orchestrator | 2026-04-11 04:33:33.431534 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-11 04:33:33.431560 | orchestrator | 2026-04-11 04:33:33.431570 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-11 04:33:33.431579 | orchestrator | Saturday 11 April 2026 04:33:02 +0000 (0:00:02.027) 0:00:02.027 ******** 2026-04-11 04:33:33.431598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-11 04:33:33.431617 | orchestrator | 2026-04-11 04:33:33.431626 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-11 04:33:33.431635 | orchestrator | Saturday 11 April 2026 04:33:04 +0000 (0:00:01.964) 0:00:03.992 ******** 2026-04-11 04:33:33.431643 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-11 04:33:33.431653 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-11 04:33:33.431661 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-11 04:33:33.431670 | orchestrator | 2026-04-11 04:33:33.431678 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-11 04:33:33.431686 | orchestrator | Saturday 11 April 2026 04:33:06 +0000 (0:00:02.769) 0:00:06.762 ******** 2026-04-11 04:33:33.431694 | orchestrator | changed: [testbed-manager] 2026-04-11 04:33:33.431702 | orchestrator | 2026-04-11 04:33:33.431713 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-11 04:33:33.431725 | orchestrator | Saturday 11 April 2026 04:33:09 +0000 (0:00:02.406) 0:00:09.169 ******** 2026-04-11 04:33:33.431734 | orchestrator | ok: [testbed-manager] 2026-04-11 04:33:33.431743 | 
orchestrator | 2026-04-11 04:33:33.431751 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-11 04:33:33.431759 | orchestrator | Saturday 11 April 2026 04:33:11 +0000 (0:00:02.122) 0:00:11.291 ******** 2026-04-11 04:33:33.431767 | orchestrator | ok: [testbed-manager] 2026-04-11 04:33:33.431775 | orchestrator | 2026-04-11 04:33:33.431783 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-11 04:33:33.431790 | orchestrator | Saturday 11 April 2026 04:33:13 +0000 (0:00:01.982) 0:00:13.274 ******** 2026-04-11 04:33:33.431798 | orchestrator | ok: [testbed-manager] 2026-04-11 04:33:33.431851 | orchestrator | 2026-04-11 04:33:33.431861 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-11 04:33:33.431869 | orchestrator | Saturday 11 April 2026 04:33:15 +0000 (0:00:01.652) 0:00:14.926 ******** 2026-04-11 04:33:33.431877 | orchestrator | changed: [testbed-manager] 2026-04-11 04:33:33.431885 | orchestrator | 2026-04-11 04:33:33.431892 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-11 04:33:33.431900 | orchestrator | Saturday 11 April 2026 04:33:27 +0000 (0:00:12.525) 0:00:27.452 ******** 2026-04-11 04:33:33.431909 | orchestrator | changed: [testbed-manager] 2026-04-11 04:33:33.431918 | orchestrator | 2026-04-11 04:33:33.431927 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-11 04:33:33.431936 | orchestrator | Saturday 11 April 2026 04:33:29 +0000 (0:00:01.755) 0:00:29.207 ******** 2026-04-11 04:33:33.431945 | orchestrator | changed: [testbed-manager] 2026-04-11 04:33:33.431954 | orchestrator | 2026-04-11 04:33:33.431964 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-11 04:33:33.431973 | orchestrator | Saturday 11 April 2026 
04:33:31 +0000 (0:00:01.714) 0:00:30.922 ******** 2026-04-11 04:33:33.431981 | orchestrator | ok: [testbed-manager] 2026-04-11 04:33:33.431990 | orchestrator | 2026-04-11 04:33:33.431999 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 04:33:33.432009 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 04:33:33.432019 | orchestrator | 2026-04-11 04:33:33.432028 | orchestrator | 2026-04-11 04:33:33.432037 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 04:33:33.432046 | orchestrator | Saturday 11 April 2026 04:33:33 +0000 (0:00:01.974) 0:00:32.897 ******** 2026-04-11 04:33:33.432056 | orchestrator | =============================================================================== 2026-04-11 04:33:33.432065 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 12.53s 2026-04-11 04:33:33.432074 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.77s 2026-04-11 04:33:33.432083 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.41s 2026-04-11 04:33:33.432092 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.12s 2026-04-11 04:33:33.432101 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.98s 2026-04-11 04:33:33.432110 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.98s 2026-04-11 04:33:33.432120 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.96s 2026-04-11 04:33:33.432128 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.75s 2026-04-11 04:33:33.432138 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.71s 2026-04-11 
04:33:33.432147 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.65s 2026-04-11 04:33:33.649609 | orchestrator | + osism apply -a upgrade common 2026-04-11 04:33:35.066005 | orchestrator | 2026-04-11 04:33:35 | INFO  | Prepare task for execution of common. 2026-04-11 04:33:35.143979 | orchestrator | 2026-04-11 04:33:35 | INFO  | Task 1d960329-275e-4d29-957f-7dbb374eec7b (common) was prepared for execution. 2026-04-11 04:33:35.144051 | orchestrator | 2026-04-11 04:33:35 | INFO  | It takes a moment until task 1d960329-275e-4d29-957f-7dbb374eec7b (common) has been started and output is visible here. 2026-04-11 04:33:55.582470 | orchestrator | 2026-04-11 04:33:55.582565 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-11 04:33:55.582585 | orchestrator | 2026-04-11 04:33:55.582594 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-11 04:33:55.582602 | orchestrator | Saturday 11 April 2026 04:33:41 +0000 (0:00:02.251) 0:00:02.251 ******** 2026-04-11 04:33:55.582609 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 04:33:55.582648 | orchestrator | 2026-04-11 04:33:55.582656 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-11 04:33:55.582663 | orchestrator | Saturday 11 April 2026 04:33:44 +0000 (0:00:03.718) 0:00:05.969 ******** 2026-04-11 04:33:55.582671 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-11 04:33:55.582678 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-11 04:33:55.582685 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-11 04:33:55.582692 | orchestrator | ok: [testbed-node-2] => 
(item=[{'service_name': 'cron'}, 'cron']) 2026-04-11 04:33:55.582700 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-11 04:33:55.582709 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-11 04:33:55.582716 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-11 04:33:55.582724 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-11 04:33:55.582731 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-11 04:33:55.582738 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-11 04:33:55.582745 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-11 04:33:55.582753 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-11 04:33:55.582760 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-11 04:33:55.582768 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-11 04:33:55.582774 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-11 04:33:55.582781 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-11 04:33:55.582788 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-11 04:33:55.582797 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-11 04:33:55.582804 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-11 04:33:55.582882 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-11 04:33:55.582891 | orchestrator | 
ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-11 04:33:55.582898 | orchestrator |
2026-04-11 04:33:55.582905 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-11 04:33:55.582912 | orchestrator | Saturday 11 April 2026 04:33:50 +0000 (0:00:05.573) 0:00:11.543 ********
2026-04-11 04:33:55.582920 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 04:33:55.582930 | orchestrator |
2026-04-11 04:33:55.582937 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-11 04:33:55.582961 | orchestrator | Saturday 11 April 2026 04:33:53 +0000 (0:00:02.834) 0:00:14.377 ********
2026-04-11 04:33:55.582978 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:33:55.582988 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:33:55.583025 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:33:55.583034 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:33:55.583041 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:33:55.583049 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:33:55.583056 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:33:55.583067 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:33:55.583078 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:33:55.583091 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:00.711878 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:00.712066 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:00.712086 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:00.712098 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:00.712110 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:00.712123 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:00.712164 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:00.712176 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:00.712219 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:00.712233 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:00.712244 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:00.712256 | orchestrator |
2026-04-11 04:34:00.712269 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-04-11 04:34:00.712281 | orchestrator | Saturday 11 April 2026 04:34:00 +0000 (0:00:06.952) 0:00:21.330 ********
2026-04-11 04:34:00.712563 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:00.712591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:00.712633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:00.712656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:00.712722 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True,
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:01.798485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:01.798597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:01.798617 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:34:01.798648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:01.798661 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:01.798697 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:34:01.798709 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:34:01.798721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:01.798734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:01.798746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:01.798859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:01.798875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:01.798892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:01.798904 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:34:01.798916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:01.798935 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:34:01.798947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:01.798958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:01.798970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:04.397102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:04.397241 | orchestrator | skipping: [testbed-node-5]
2026-04-11 04:34:04.397270 | orchestrator |
2026-04-11 04:34:04.397292 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-04-11 04:34:04.397306 | orchestrator | Saturday 11 April 2026 04:34:03
+0000 (0:00:02.836) 0:00:24.167 ********
2026-04-11 04:34:04.397318 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:04.397357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:04.397369 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:04.397428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:04.397441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:04.397453 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:04.397464 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:34:04.397506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:04.397519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:04.397543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:04.397555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:04.397566 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:34:04.397576 |
orchestrator | skipping: [testbed-node-1]
2026-04-11 04:34:04.397588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:04.397601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:04.397613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:04.397625 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:34:04.397645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:18.088294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-11 04:34:18.088411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:18.088420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:18.088428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:18.088440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:18.088449 | orchestrator | skipping: [testbed-node-5]
2026-04-11 04:34:18.088456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:18.088462 | orchestrator | skipping: [testbed-node-4]
2026-04-11 04:34:18.088468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:34:18.088474 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:34:18.088480 | orchestrator |
2026-04-11 04:34:18.088488 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-04-11 04:34:18.088495 | orchestrator | Saturday 11 April 2026 04:34:06 +0000 (0:00:03.401) 0:00:27.568 ********
2026-04-11 04:34:18.088522 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:34:18.088529 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:34:18.088535 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:34:18.088541 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:34:18.088547 | orchestrator | skipping: [testbed-node-3]
2026-04-11 04:34:18.088551 | orchestrator | skipping: [testbed-node-4]
2026-04-11 04:34:18.088554 | orchestrator | skipping: [testbed-node-5]
2026-04-11 04:34:18.088558 | orchestrator |
2026-04-11 04:34:18.088562 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-11 04:34:18.088566 | orchestrator | Saturday 11 April 2026 04:34:08 +0000 (0:00:02.124) 0:00:29.693 ********
2026-04-11 04:34:18.088570 | orchestrator | skipping: [testbed-manager]
2026-04-11 04:34:18.088573 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:34:18.088577 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:34:18.088581 | orchestrator |
skipping: [testbed-node-2] 2026-04-11 04:34:18.088584 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:34:18.088588 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:34:18.088592 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:34:18.088595 | orchestrator | 2026-04-11 04:34:18.088599 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-11 04:34:18.088603 | orchestrator | Saturday 11 April 2026 04:34:10 +0000 (0:00:02.239) 0:00:31.933 ******** 2026-04-11 04:34:18.088607 | orchestrator | skipping: [testbed-manager] 2026-04-11 04:34:18.088610 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:34:18.088614 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:34:18.088618 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:34:18.088626 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:34:18.088630 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:34:18.088634 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:34:18.088637 | orchestrator | 2026-04-11 04:34:18.088641 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-04-11 04:34:18.088645 | orchestrator | Saturday 11 April 2026 04:34:13 +0000 (0:00:02.332) 0:00:34.265 ******** 2026-04-11 04:34:18.088648 | orchestrator | changed: [testbed-manager] 2026-04-11 04:34:18.088652 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:34:18.088656 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:34:18.088660 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:34:18.088663 | orchestrator | changed: [testbed-node-3] 2026-04-11 04:34:18.088667 | orchestrator | changed: [testbed-node-4] 2026-04-11 04:34:18.088671 | orchestrator | changed: [testbed-node-5] 2026-04-11 04:34:18.088674 | orchestrator | 2026-04-11 04:34:18.088678 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-11 04:34:18.088683 | 
orchestrator | Saturday 11 April 2026 04:34:16 +0000 (0:00:03.370) 0:00:37.635 ******** 2026-04-11 04:34:18.088687 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:18.088692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:18.088696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:18.088704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:18.088715 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:22.532293 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:22.532420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:22.532444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:22.532471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:22.532518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:22.532537 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:22.532557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:22.532609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:22.532636 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:22.532655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:22.532672 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:22.532685 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:22.532704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:22.532714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:22.532724 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:22.532744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:45.669640 | orchestrator | 2026-04-11 04:34:45.669811 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-11 04:34:45.669844 | orchestrator | Saturday 11 April 2026 04:34:23 +0000 (0:00:07.222) 0:00:44.858 ******** 2026-04-11 04:34:45.669856 | orchestrator | [WARNING]: Skipped 2026-04-11 04:34:45.669885 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-11 04:34:45.669898 | orchestrator | to this access issue: 2026-04-11 04:34:45.669909 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-11 04:34:45.669920 | orchestrator | directory 2026-04-11 04:34:45.669931 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 04:34:45.669943 | orchestrator | 2026-04-11 04:34:45.669955 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-11 04:34:45.669966 | orchestrator | Saturday 11 April 2026 04:34:26 +0000 (0:00:02.618) 0:00:47.477 ******** 2026-04-11 04:34:45.669977 | orchestrator | [WARNING]: Skipped 2026-04-11 04:34:45.669988 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-11 04:34:45.669999 | orchestrator | to this access issue: 2026-04-11 04:34:45.670009 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-11 04:34:45.670080 | orchestrator | directory 2026-04-11 04:34:45.670092 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 04:34:45.670103 | orchestrator | 2026-04-11 04:34:45.670114 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-11 04:34:45.670147 | orchestrator | Saturday 11 April 2026 
04:34:28 +0000 (0:00:01.994) 0:00:49.472 ******** 2026-04-11 04:34:45.670161 | orchestrator | [WARNING]: Skipped 2026-04-11 04:34:45.670174 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-11 04:34:45.670188 | orchestrator | to this access issue: 2026-04-11 04:34:45.670200 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-11 04:34:45.670213 | orchestrator | directory 2026-04-11 04:34:45.670225 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 04:34:45.670237 | orchestrator | 2026-04-11 04:34:45.670250 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-11 04:34:45.670262 | orchestrator | Saturday 11 April 2026 04:34:30 +0000 (0:00:02.290) 0:00:51.762 ******** 2026-04-11 04:34:45.670275 | orchestrator | [WARNING]: Skipped 2026-04-11 04:34:45.670288 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-11 04:34:45.670300 | orchestrator | to this access issue: 2026-04-11 04:34:45.670312 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-11 04:34:45.670325 | orchestrator | directory 2026-04-11 04:34:45.670337 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 04:34:45.670349 | orchestrator | 2026-04-11 04:34:45.670362 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-11 04:34:45.670374 | orchestrator | Saturday 11 April 2026 04:34:32 +0000 (0:00:02.062) 0:00:53.825 ******** 2026-04-11 04:34:45.670387 | orchestrator | changed: [testbed-manager] 2026-04-11 04:34:45.670399 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:34:45.670412 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:34:45.670424 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:34:45.670436 | orchestrator | changed: [testbed-node-3] 2026-04-11 
04:34:45.670448 | orchestrator | changed: [testbed-node-4] 2026-04-11 04:34:45.670460 | orchestrator | changed: [testbed-node-5] 2026-04-11 04:34:45.670472 | orchestrator | 2026-04-11 04:34:45.670484 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-11 04:34:45.670497 | orchestrator | Saturday 11 April 2026 04:34:37 +0000 (0:00:04.853) 0:00:58.679 ******** 2026-04-11 04:34:45.670510 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 04:34:45.670523 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 04:34:45.670534 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 04:34:45.670544 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 04:34:45.670555 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 04:34:45.670566 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 04:34:45.670576 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-11 04:34:45.670587 | orchestrator | 2026-04-11 04:34:45.670598 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-11 04:34:45.670608 | orchestrator | Saturday 11 April 2026 04:34:41 +0000 (0:00:04.027) 0:01:02.706 ******** 2026-04-11 04:34:45.670619 | orchestrator | ok: [testbed-manager] 2026-04-11 04:34:45.670630 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:34:45.670641 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:34:45.670652 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:34:45.670662 | orchestrator | ok: [testbed-node-3] 2026-04-11 
04:34:45.670673 | orchestrator | ok: [testbed-node-4] 2026-04-11 04:34:45.670684 | orchestrator | ok: [testbed-node-5] 2026-04-11 04:34:45.670694 | orchestrator | 2026-04-11 04:34:45.670705 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-11 04:34:45.670724 | orchestrator | Saturday 11 April 2026 04:34:44 +0000 (0:00:03.230) 0:01:05.936 ******** 2026-04-11 04:34:45.670757 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:45.670847 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:34:45.670861 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:45.670873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:34:45.670885 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:45.670896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:34:45.670908 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:45.670937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:34:54.112009 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:54.112143 | orchestrator | ok: [testbed-node-3] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:54.112193 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:54.112213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:34:54.112233 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:54.112251 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:54.112268 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:54.112319 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:54.112374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:34:54.112397 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:54.112417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:34:54.112437 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:54.112449 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:54.112462 | orchestrator | 2026-04-11 04:34:54.112475 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-11 04:34:54.112487 | orchestrator | Saturday 11 April 2026 04:34:48 +0000 (0:00:03.298) 0:01:09.235 ******** 2026-04-11 04:34:54.112499 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-11 04:34:54.112523 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-11 04:34:54.112536 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-11 04:34:54.112548 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-11 04:34:54.112560 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-11 04:34:54.112572 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-11 04:34:54.112585 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-11 04:34:54.112597 | orchestrator | 2026-04-11 04:34:54.112609 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-11 04:34:54.112621 | orchestrator | Saturday 11 April 2026 04:34:51 +0000 (0:00:03.610) 0:01:12.845 ******** 
2026-04-11 04:34:54.112634 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 04:34:54.112647 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 04:34:54.112659 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 04:34:54.112671 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 04:34:54.112684 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 04:34:54.112696 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 04:34:54.112709 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-11 04:34:54.112721 | orchestrator | 2026-04-11 04:34:54.112747 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-11 04:34:58.837743 | orchestrator | Saturday 11 April 2026 04:34:55 +0000 (0:00:03.749) 0:01:16.594 ******** 2026-04-11 04:34:58.837884 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:58.837909 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:58.837922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:58.837934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:34:58.837971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2026-04-11 04:34:58.837984 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:58.838010 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:58.838080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-11 04:34:58.838094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:58.838105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:58.838126 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:58.838140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:58.838153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:58.838166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:58.838185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:34:58.838208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:35:03.964104 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-11 04:35:03.964242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:35:03.964288 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:35:03.964303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:35:03.964318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 04:35:03.964331 | orchestrator | 2026-04-11 04:35:03.964344 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-04-11 04:35:03.964356 | orchestrator | Saturday 11 April 2026 04:35:01 +0000 (0:00:05.914) 0:01:22.509 ******** 2026-04-11 04:35:03.964369 | orchestrator | changed: [testbed-manager] => { 2026-04-11 04:35:03.964382 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:35:03.964394 | orchestrator | } 2026-04-11 04:35:03.964405 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 04:35:03.964417 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:35:03.964428 | orchestrator | } 2026-04-11 04:35:03.964440 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 04:35:03.964451 | orchestrator |  "msg": "Notifying handlers" 
2026-04-11 04:35:03.964462 | orchestrator | } 2026-04-11 04:35:03.964474 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 04:35:03.964486 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:35:03.964497 | orchestrator | } 2026-04-11 04:35:03.964508 | orchestrator | changed: [testbed-node-3] => { 2026-04-11 04:35:03.964519 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:35:03.964531 | orchestrator | } 2026-04-11 04:35:03.964542 | orchestrator | changed: [testbed-node-4] => { 2026-04-11 04:35:03.964553 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:35:03.964564 | orchestrator | } 2026-04-11 04:35:03.964576 | orchestrator | changed: [testbed-node-5] => { 2026-04-11 04:35:03.964587 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:35:03.964600 | orchestrator | } 2026-04-11 04:35:03.964614 | orchestrator | 2026-04-11 04:35:03.964645 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 04:35:03.964659 | orchestrator | Saturday 11 April 2026 04:35:03 +0000 (0:00:01.953) 0:01:24.463 ******** 2026-04-11 04:35:03.964693 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 04:35:03.964719 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:03.964732 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:03.964746 | orchestrator | skipping: [testbed-manager] 2026-04-11 04:35:03.964785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 04:35:03.964799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:03.964812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:03.964825 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:35:03.964839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 04:35:03.964867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:09.189438 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:09.189690 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:35:09.189712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 04:35:09.189725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:09.189736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:09.189783 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:35:09.189794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 04:35:09.189805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:09.189832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:09.189867 | orchestrator | skipping: [testbed-node-3] 2026-04-11 04:35:09.189902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 04:35:09.189915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:09.189927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:09.189939 | orchestrator | skipping: [testbed-node-4] 2026-04-11 04:35:09.189951 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-11 04:35:09.189962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:09.189974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:35:09.189986 | orchestrator | skipping: [testbed-node-5] 2026-04-11 04:35:09.189998 | orchestrator | 2026-04-11 04:35:09.190010 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-11 04:35:09.190072 | orchestrator | Saturday 11 April 2026 04:35:06 +0000 (0:00:03.067) 0:01:27.531 
********
2026-04-11 04:35:09.190082 | orchestrator |
2026-04-11 04:35:09.190092 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 04:35:09.190109 | orchestrator | Saturday 11 April 2026 04:35:06 +0000 (0:00:00.470) 0:01:28.002 ********
2026-04-11 04:35:09.190119 | orchestrator |
2026-04-11 04:35:09.190129 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 04:35:09.190144 | orchestrator | Saturday 11 April 2026 04:35:07 +0000 (0:00:00.452) 0:01:28.454 ********
2026-04-11 04:35:09.190154 | orchestrator |
2026-04-11 04:35:09.190163 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 04:35:09.190173 | orchestrator | Saturday 11 April 2026 04:35:07 +0000 (0:00:00.449) 0:01:28.904 ********
2026-04-11 04:35:09.190182 | orchestrator |
2026-04-11 04:35:09.190192 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 04:35:09.190201 | orchestrator | Saturday 11 April 2026 04:35:08 +0000 (0:00:00.481) 0:01:29.385 ********
2026-04-11 04:35:09.190211 | orchestrator |
2026-04-11 04:35:09.190220 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 04:35:09.190230 | orchestrator | Saturday 11 April 2026 04:35:08 +0000 (0:00:00.446) 0:01:29.832 ********
2026-04-11 04:35:09.190239 | orchestrator |
2026-04-11 04:35:09.190249 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-11 04:35:09.190265 | orchestrator | Saturday 11 April 2026 04:35:09 +0000 (0:00:00.505) 0:01:30.337 ********
2026-04-11 04:37:49.718898 | orchestrator |
2026-04-11 04:37:49.719029 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-11 04:37:49.719048 | orchestrator | Saturday 11 April 2026 04:35:10 +0000 (0:00:00.827) 0:01:31.165 ********
2026-04-11 04:37:49.719060 | orchestrator | changed: [testbed-manager]
2026-04-11 04:37:49.719073 | orchestrator | changed: [testbed-node-3]
2026-04-11 04:37:49.719084 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:37:49.719095 | orchestrator | changed: [testbed-node-2]
2026-04-11 04:37:49.719106 | orchestrator | changed: [testbed-node-1]
2026-04-11 04:37:49.719116 | orchestrator | changed: [testbed-node-4]
2026-04-11 04:37:49.719127 | orchestrator | changed: [testbed-node-5]
2026-04-11 04:37:49.719138 | orchestrator |
2026-04-11 04:37:49.719149 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-04-11 04:37:49.719160 | orchestrator | Saturday 11 April 2026 04:36:21 +0000 (0:01:11.936) 0:02:43.102 ********
2026-04-11 04:37:49.719171 | orchestrator | changed: [testbed-manager]
2026-04-11 04:37:49.719181 | orchestrator | changed: [testbed-node-3]
2026-04-11 04:37:49.719192 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:37:49.719203 | orchestrator | changed: [testbed-node-2]
2026-04-11 04:37:49.719213 | orchestrator | changed: [testbed-node-1]
2026-04-11 04:37:49.719224 | orchestrator | changed: [testbed-node-4]
2026-04-11 04:37:49.719234 | orchestrator | changed: [testbed-node-5]
2026-04-11 04:37:49.719246 | orchestrator |
2026-04-11 04:37:49.719256 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-04-11 04:37:49.719267 | orchestrator | Saturday 11 April 2026 04:37:26 +0000 (0:01:04.494) 0:03:47.597 ********
2026-04-11 04:37:49.719278 | orchestrator | ok: [testbed-manager]
2026-04-11 04:37:49.719290 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:37:49.719301 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:37:49.719311 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:37:49.719322 | orchestrator | ok: [testbed-node-3]
2026-04-11 04:37:49.719332 | orchestrator | ok: [testbed-node-4]
2026-04-11 04:37:49.719343 | orchestrator | ok: [testbed-node-5]
2026-04-11 04:37:49.719354 | orchestrator |
2026-04-11 04:37:49.719364 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-04-11 04:37:49.719375 | orchestrator | Saturday 11 April 2026 04:37:29 +0000 (0:00:03.276) 0:03:50.873 ********
2026-04-11 04:37:49.719386 | orchestrator | changed: [testbed-manager]
2026-04-11 04:37:49.719396 | orchestrator | changed: [testbed-node-3]
2026-04-11 04:37:49.719407 | orchestrator | changed: [testbed-node-1]
2026-04-11 04:37:49.719418 | orchestrator | changed: [testbed-node-0]
2026-04-11 04:37:49.719430 | orchestrator | changed: [testbed-node-2]
2026-04-11 04:37:49.719466 | orchestrator | changed: [testbed-node-4]
2026-04-11 04:37:49.719479 | orchestrator | changed: [testbed-node-5]
2026-04-11 04:37:49.719492 | orchestrator |
2026-04-11 04:37:49.719505 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 04:37:49.719518 | orchestrator | testbed-manager : ok=22  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 04:37:49.719532 | orchestrator | testbed-node-0 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 04:37:49.719545 | orchestrator | testbed-node-1 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 04:37:49.719557 | orchestrator | testbed-node-2 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 04:37:49.719569 | orchestrator | testbed-node-3 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 04:37:49.719581 | orchestrator | testbed-node-4 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 04:37:49.719593 | orchestrator | testbed-node-5 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 04:37:49.719605 | orchestrator |
2026-04-11 04:37:49.719618 | orchestrator |
2026-04-11 04:37:49.719630 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 04:37:49.719644 | orchestrator | Saturday 11 April 2026 04:37:49 +0000 (0:00:19.545) 0:04:10.418 ********
2026-04-11 04:37:49.719656 | orchestrator | ===============================================================================
2026-04-11 04:37:49.719701 | orchestrator | common : Restart fluentd container ------------------------------------- 71.94s
2026-04-11 04:37:49.719715 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 64.49s
2026-04-11 04:37:49.719727 | orchestrator | common : Restart cron container ---------------------------------------- 19.55s
2026-04-11 04:37:49.719754 | orchestrator | common : Copying over config.json files for services -------------------- 7.22s
2026-04-11 04:37:49.719767 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.95s
2026-04-11 04:37:49.719779 | orchestrator | service-check-containers : common | Check containers -------------------- 5.92s
2026-04-11 04:37:49.719793 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.57s
2026-04-11 04:37:49.719804 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.85s
2026-04-11 04:37:49.719815 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.03s
2026-04-11 04:37:49.719826 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.75s
2026-04-11 04:37:49.719836 | orchestrator | common : include_tasks -------------------------------------------------- 3.72s
2026-04-11 04:37:49.719847 | orchestrator | common : Flush handlers ------------------------------------------------- 3.63s
2026-04-11 04:37:49.719874 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.61s
2026-04-11 04:37:49.719886 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.40s
2026-04-11 04:37:49.719896 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.37s
2026-04-11 04:37:49.719907 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.30s
2026-04-11 04:37:49.719918 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.28s
2026-04-11 04:37:49.719928 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.23s
2026-04-11 04:37:49.719939 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.07s
2026-04-11 04:37:49.719949 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.84s
2026-04-11 04:37:49.903224 | orchestrator | + osism apply -a upgrade loadbalancer
2026-04-11 04:37:51.245177 | orchestrator | 2026-04-11 04:37:51 | INFO  | Prepare task for execution of loadbalancer.
2026-04-11 04:37:51.328428 | orchestrator | 2026-04-11 04:37:51 | INFO  | Task 615a30d9-3d1f-4a38-8e03-51825e63e728 (loadbalancer) was prepared for execution.
2026-04-11 04:37:51.328528 | orchestrator | 2026-04-11 04:37:51 | INFO  | It takes a moment until task 615a30d9-3d1f-4a38-8e03-51825e63e728 (loadbalancer) has been started and output is visible here.
2026-04-11 04:38:25.654802 | orchestrator |
2026-04-11 04:38:25.654956 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 04:38:25.654987 | orchestrator |
2026-04-11 04:38:25.655008 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 04:38:25.655030 | orchestrator | Saturday 11 April 2026 04:37:56 +0000 (0:00:01.546) 0:00:01.546 ********
2026-04-11 04:38:25.655051 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:38:25.655072 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:38:25.655093 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:38:25.655114 | orchestrator |
2026-04-11 04:38:25.655133 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 04:38:25.655154 | orchestrator | Saturday 11 April 2026 04:37:58 +0000 (0:00:02.060) 0:00:03.607 ********
2026-04-11 04:38:25.655176 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-11 04:38:25.655197 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-11 04:38:25.655217 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-11 04:38:25.655239 | orchestrator |
2026-04-11 04:38:25.655263 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-11 04:38:25.655285 | orchestrator |
2026-04-11 04:38:25.655308 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-11 04:38:25.655330 | orchestrator | Saturday 11 April 2026 04:38:01 +0000 (0:00:02.586) 0:00:06.193 ********
2026-04-11 04:38:25.655356 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:38:25.655375 | orchestrator |
2026-04-11 04:38:25.655397 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-04-11 04:38:25.655417 | orchestrator | Saturday 11 April 2026 04:38:04 +0000 (0:00:03.050) 0:00:09.244 ********
2026-04-11 04:38:25.655437 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:38:25.655454 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:38:25.655473 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:38:25.655492 | orchestrator |
2026-04-11 04:38:25.655510 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-04-11 04:38:25.655529 | orchestrator | Saturday 11 April 2026 04:38:06 +0000 (0:00:02.276) 0:00:11.781 ********
2026-04-11 04:38:25.655548 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:38:25.655566 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:38:25.655584 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:38:25.655602 | orchestrator |
2026-04-11 04:38:25.655621 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-11 04:38:25.655641 | orchestrator | Saturday 11 April 2026 04:38:08 +0000 (0:00:01.945) 0:00:14.058 ********
2026-04-11 04:38:25.655736 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:38:25.655758 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:38:25.655778 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:38:25.655796 | orchestrator |
2026-04-11 04:38:25.655815 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-11 04:38:25.655827 | orchestrator | Saturday 11 April 2026 04:38:10 +0000 (0:00:01.945) 0:00:16.003 ********
2026-04-11 04:38:25.655839 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:38:25.655850 | orchestrator |
2026-04-11 04:38:25.655862 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-11 04:38:25.655902 | orchestrator | Saturday 11 April 2026 04:38:12 +0000 (0:00:01.958) 0:00:17.961 ********
2026-04-11 04:38:25.655914 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:38:25.655925 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:38:25.655936 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:38:25.655947 | orchestrator |
2026-04-11 04:38:25.655958 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-11 04:38:25.655969 | orchestrator | Saturday 11 April 2026 04:38:14 +0000 (0:00:01.794) 0:00:19.756 ********
2026-04-11 04:38:25.655980 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-11 04:38:25.655991 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-11 04:38:25.656002 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-11 04:38:25.656013 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-11 04:38:25.656024 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-11 04:38:25.656035 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-11 04:38:25.656045 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-11 04:38:25.656058 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-11 04:38:25.656070 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-11 04:38:25.656081 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-11 04:38:25.656111 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-11 04:38:25.656123 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-11 04:38:25.656134 | orchestrator |
2026-04-11 04:38:25.656145 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-11 04:38:25.656156 | orchestrator | Saturday 11 April 2026 04:38:18 +0000 (0:00:04.148) 0:00:23.904 ********
2026-04-11 04:38:25.656167 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-04-11 04:38:25.656179 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-04-11 04:38:25.656190 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-04-11 04:38:25.656201 | orchestrator |
2026-04-11 04:38:25.656212 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-11 04:38:25.656246 | orchestrator | Saturday 11 April 2026 04:38:20 +0000 (0:00:02.127) 0:00:25.584 ********
2026-04-11 04:38:25.656258 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-04-11 04:38:25.656269 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-04-11 04:38:25.656280 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-04-11 04:38:25.656291 | orchestrator |
2026-04-11 04:38:25.656302 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-11 04:38:25.656313 | orchestrator | Saturday 11 April 2026 04:38:22 +0000 (0:00:01.943) 0:00:27.711 ********
2026-04-11 04:38:25.656324 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-11 04:38:25.656335 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:38:25.656346 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-11 04:38:25.656357 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:38:25.656368 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-11 04:38:25.656378 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:38:25.656389 | orchestrator |
2026-04-11 04:38:25.656400 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-11 04:38:25.656411 | orchestrator | Saturday 11 April 2026 04:38:24 +0000 (0:00:01.943) 0:00:29.655 ********
2026-04-11 04:38:25.656425 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-11 04:38:25.656457 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-11 04:38:25.656476 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-11 04:38:25.656488 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:38:25.656499 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:38:25.656519 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:38:37.268209 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:38:37.268345 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:38:37.268362 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:38:37.268375 | orchestrator |
2026-04-11 04:38:37.268388 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-04-11 04:38:37.268414 | orchestrator | Saturday 11 April 2026 04:38:27 +0000 (0:00:02.701) 0:00:32.357 ********
2026-04-11 04:38:37.268425 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:38:37.268437 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:38:37.268448 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:38:37.268459 | orchestrator |
2026-04-11 04:38:37.268470 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-11 04:38:37.268481 | orchestrator | Saturday 11 April 2026 04:38:29 +0000 (0:00:02.272) 0:00:34.629 ********
2026-04-11 04:38:37.268491 | orchestrator | ok: [testbed-node-0] => (item=users)
2026-04-11 04:38:37.268503 | orchestrator | ok: [testbed-node-1] => (item=users)
2026-04-11 04:38:37.268514 | orchestrator | ok: [testbed-node-2] => (item=users)
2026-04-11 04:38:37.268525 | orchestrator | ok: [testbed-node-0] => (item=rules)
2026-04-11 04:38:37.268535 | orchestrator | ok: [testbed-node-1] => (item=rules)
2026-04-11 04:38:37.268547 | orchestrator | ok: [testbed-node-2] => (item=rules)
2026-04-11 04:38:37.268558 | orchestrator |
2026-04-11 04:38:37.268569 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-11 04:38:37.268579 | orchestrator | Saturday 11 April 2026 04:38:32 +0000 (0:00:02.634) 0:00:37.263 ********
2026-04-11 04:38:37.268590 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:38:37.268601 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:38:37.268611 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:38:37.268622 | orchestrator |
2026-04-11 04:38:37.268633 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-11 04:38:37.268644 | orchestrator | Saturday 11 April 2026 04:38:34 +0000 (0:00:01.929) 0:00:39.192 ********
2026-04-11 04:38:37.268681 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:38:37.268692 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:38:37.268703 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:38:37.268713 | orchestrator |
2026-04-11 04:38:37.268724 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-04-11 04:38:37.268738 | orchestrator | Saturday 11 April 2026 04:38:36 +0000 (0:00:02.363) 0:00:41.556 ********
2026-04-11 04:38:37.268752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-11 04:38:37.268793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:38:37.268808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:38:37.268822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bef860233c0760a47d2570a9e6c09439614b185d', '__omit_place_holder__bef860233c0760a47d2570a9e6c09439614b185d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-11 04:38:37.268835 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:38:37.268856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-11 04:38:37.268870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:38:37.268883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:38:37.268903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bef860233c0760a47d2570a9e6c09439614b185d', '__omit_place_holder__bef860233c0760a47d2570a9e6c09439614b185d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-11 04:38:37.268916 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:38:37.268937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-11 04:38:41.017629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:38:41.017835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:38:41.017866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bef860233c0760a47d2570a9e6c09439614b185d', '__omit_place_holder__bef860233c0760a47d2570a9e6c09439614b185d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-11 04:38:41.017888 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:38:41.017910 | orchestrator |
2026-04-11 04:38:41.017931 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-04-11 04:38:41.017952 | orchestrator | Saturday 11 April 2026 04:38:38 +0000 (0:00:02.000) 0:00:43.556 ********
2026-04-11 04:38:41.017971 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-11 04:38:41.018089 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-11 04:38:41.018110 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-11 04:38:41.018154 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:38:41.018182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:38:41.018204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bef860233c0760a47d2570a9e6c09439614b185d', '__omit_place_holder__bef860233c0760a47d2570a9e6c09439614b185d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-11 04:38:41.018225 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:38:41.018260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:38:41.018281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bef860233c0760a47d2570a9e6c09439614b185d', '__omit_place_holder__bef860233c0760a47d2570a9e6c09439614b185d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-11 04:38:41.018315 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:38:54.805935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 04:38:54.806102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bef860233c0760a47d2570a9e6c09439614b185d', '__omit_place_holder__bef860233c0760a47d2570a9e6c09439614b185d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-11 04:38:54.806123 | orchestrator | 2026-04-11 04:38:54.806138 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-11 04:38:54.806151 | orchestrator | Saturday 11 April 2026 04:38:42 +0000 (0:00:03.801) 0:00:47.358 ******** 2026-04-11 04:38:54.806204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-11 04:38:54.806219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-11 04:38:54.806231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-11 04:38:54.806243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 04:38:54.806273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 04:38:54.806291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 04:38:54.806310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 04:38:54.806322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 04:38:54.806334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 04:38:54.806345 | orchestrator | 2026-04-11 04:38:54.806357 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-11 04:38:54.806368 | orchestrator | Saturday 11 April 2026 04:38:46 +0000 (0:00:04.508) 0:00:51.867 ******** 2026-04-11 04:38:54.806379 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-11 04:38:54.806391 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-11 04:38:54.806402 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-11 
04:38:54.806413 | orchestrator | 2026-04-11 04:38:54.806424 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-11 04:38:54.806435 | orchestrator | Saturday 11 April 2026 04:38:49 +0000 (0:00:02.941) 0:00:54.808 ******** 2026-04-11 04:38:54.806446 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-11 04:38:54.806457 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-11 04:38:54.806470 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-11 04:38:54.806483 | orchestrator | 2026-04-11 04:38:54.806496 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-11 04:38:54.806508 | orchestrator | Saturday 11 April 2026 04:38:54 +0000 (0:00:04.621) 0:00:59.430 ******** 2026-04-11 04:38:54.806523 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:38:54.806537 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:38:54.806558 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:39:15.509348 | orchestrator | 2026-04-11 04:39:15.509473 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-11 04:39:15.509489 | orchestrator | Saturday 11 April 2026 04:38:55 +0000 (0:00:01.685) 0:01:01.116 ******** 2026-04-11 04:39:15.509500 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-11 04:39:15.509511 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-11 04:39:15.509521 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-11 04:39:15.509554 | 
orchestrator | 2026-04-11 04:39:15.509619 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-11 04:39:15.509632 | orchestrator | Saturday 11 April 2026 04:38:59 +0000 (0:00:03.200) 0:01:04.316 ******** 2026-04-11 04:39:15.509694 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-11 04:39:15.509708 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-11 04:39:15.509718 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-11 04:39:15.509727 | orchestrator | 2026-04-11 04:39:15.509737 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-11 04:39:15.509747 | orchestrator | Saturday 11 April 2026 04:39:02 +0000 (0:00:02.913) 0:01:07.230 ******** 2026-04-11 04:39:15.509756 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:39:15.509766 | orchestrator | 2026-04-11 04:39:15.509775 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-11 04:39:15.509785 | orchestrator | Saturday 11 April 2026 04:39:03 +0000 (0:00:01.691) 0:01:08.921 ******** 2026-04-11 04:39:15.509795 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-04-11 04:39:15.509805 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-04-11 04:39:15.509815 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-04-11 04:39:15.509825 | orchestrator | 2026-04-11 04:39:15.509834 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-11 04:39:15.509844 | orchestrator | Saturday 11 April 2026 04:39:06 +0000 (0:00:02.605) 0:01:11.527 ******** 2026-04-11 04:39:15.509853 | 
orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-11 04:39:15.509863 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-11 04:39:15.509873 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-11 04:39:15.509882 | orchestrator | 2026-04-11 04:39:15.509892 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-04-11 04:39:15.509903 | orchestrator | Saturday 11 April 2026 04:39:09 +0000 (0:00:02.941) 0:01:14.468 ******** 2026-04-11 04:39:15.509915 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:39:15.509927 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:39:15.509939 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:39:15.509950 | orchestrator | 2026-04-11 04:39:15.509961 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-04-11 04:39:15.509972 | orchestrator | Saturday 11 April 2026 04:39:10 +0000 (0:00:01.317) 0:01:15.785 ******** 2026-04-11 04:39:15.509984 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:39:15.509994 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:39:15.510005 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:39:15.510070 | orchestrator | 2026-04-11 04:39:15.510083 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-11 04:39:15.510094 | orchestrator | Saturday 11 April 2026 04:39:12 +0000 (0:00:01.698) 0:01:17.484 ******** 2026-04-11 04:39:15.510109 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-11 04:39:15.510125 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-11 04:39:15.510166 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-11 04:39:15.510195 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 04:39:15.510208 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 04:39:15.510219 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 04:39:15.510232 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 04:39:15.510246 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 04:39:15.510271 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 04:39:19.139493 | orchestrator | 2026-04-11 04:39:19.139565 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-11 04:39:19.139572 | orchestrator | Saturday 11 April 2026 04:39:16 +0000 (0:00:04.270) 0:01:21.754 ******** 2026-04-11 04:39:19.139693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 04:39:19.139708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 04:39:19.139715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 04:39:19.139721 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:39:19.139728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 04:39:19.139734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 04:39:19.139759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 04:39:19.139766 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:39:19.139786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 04:39:19.139797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 04:39:19.139804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 04:39:19.139810 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:39:19.139817 | orchestrator | 2026-04-11 04:39:19.139823 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 
2026-04-11 04:39:19.139830 | orchestrator | Saturday 11 April 2026 04:39:18 +0000 (0:00:02.087) 0:01:23.842 ********
2026-04-11 04:39:19.139837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-11 04:39:19.139844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:39:19.139852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:39:19.139856 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:39:19.139865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-11 04:39:29.774399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:39:29.774512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:39:29.774530 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:39:29.774545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-11 04:39:29.774557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:39:29.774593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:39:29.774605 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:39:29.774616 | orchestrator |
2026-04-11 04:39:29.774628 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-04-11 04:39:29.774706 | orchestrator | Saturday 11 April 2026 04:39:20 +0000 (0:00:01.692) 0:01:25.535 ********
2026-04-11 04:39:29.774721 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-11 04:39:29.774733 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-11 04:39:29.774744 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-11 04:39:29.774755 | orchestrator |
2026-04-11 04:39:29.774766 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-04-11 04:39:29.774783 | orchestrator | Saturday 11 April 2026 04:39:23 +0000 (0:00:02.714) 0:01:28.250 ********
2026-04-11 04:39:29.774810 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-11 04:39:29.774831 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-11 04:39:29.774850 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-11 04:39:29.774867 | orchestrator |
2026-04-11 04:39:29.774909 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-04-11 04:39:29.774930 | orchestrator | Saturday 11 April 2026 04:39:25 +0000 (0:00:02.445) 0:01:30.696 ********
2026-04-11 04:39:29.774949 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 04:39:29.775422 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 04:39:29.775448 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 04:39:29.775459 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 04:39:29.775470 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:39:29.775481 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 04:39:29.775492 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:39:29.775503 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 04:39:29.775513 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:39:29.775524 | orchestrator |
2026-04-11 04:39:29.775535 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-11 04:39:29.775546 | orchestrator | Saturday 11 April 2026 04:39:27 +0000 (0:00:02.235) 0:01:32.931 ********
2026-04-11 04:39:29.775558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-11 04:39:29.775585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-11 04:39:29.775597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-11 04:39:29.775609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:39:29.775670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:39:33.879539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:39:33.879711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:39:33.879758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:39:33.879772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:39:33.879784 | orchestrator |
2026-04-11 04:39:33.879797 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-04-11 04:39:33.879810 | orchestrator | Saturday 11 April 2026 04:39:31 +0000 (0:00:03.917) 0:01:36.848 ********
2026-04-11 04:39:33.879822 | orchestrator | changed: [testbed-node-0] => {
2026-04-11 04:39:33.879834 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 04:39:33.879845 | orchestrator | }
2026-04-11 04:39:33.879858 | orchestrator | changed: [testbed-node-1] => {
2026-04-11 04:39:33.879869 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 04:39:33.879880 | orchestrator | }
2026-04-11 04:39:33.879890 | orchestrator | changed: [testbed-node-2] => {
2026-04-11 04:39:33.879901 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 04:39:33.879912 | orchestrator | }
2026-04-11 04:39:33.879922 | orchestrator |
2026-04-11 04:39:33.879934 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-11 04:39:33.879945 | orchestrator | Saturday 11 April 2026 04:39:33 +0000 (0:00:01.649) 0:01:38.498 ********
2026-04-11 04:39:33.879956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-11 04:39:33.880001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:39:33.880015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:39:33.880035 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:39:33.880046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-11 04:39:33.880058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:39:33.880071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:39:33.880084 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:39:33.880096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-11 04:39:33.880109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-11 04:39:33.880137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-11 04:39:40.741268 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:39:40.741377 | orchestrator |
2026-04-11 04:39:40.741394 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-04-11 04:39:40.741406 | orchestrator | Saturday 11 April 2026 04:39:35 +0000 (0:00:01.866) 0:01:40.365 ********
2026-04-11 04:39:40.741417 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:39:40.741428 | orchestrator |
2026-04-11 04:39:40.741440 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-04-11 04:39:40.741451 | orchestrator | Saturday 11 April 2026 04:39:37 +0000 (0:00:01.922) 0:01:42.287 ********
2026-04-11 04:39:40.741466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 04:39:40.741483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-11 04:39:40.741496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-11 04:39:40.741509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-11 04:39:40.741556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 04:39:40.741591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-11 04:39:40.741603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-11 04:39:40.741614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-11 04:39:40.741626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 04:39:40.741674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-11 04:39:40.741706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-11 04:39:42.621391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-11 04:39:42.621503 | orchestrator |
2026-04-11 04:39:42.621521 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-04-11 04:39:42.621535 | orchestrator | Saturday 11 April 2026 04:39:41 +0000 (0:00:04.705) 0:01:46.993 ********
2026-04-11 04:39:42.621548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 04:39:42.621565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-11 04:39:42.621578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-11 04:39:42.621615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-11 04:39:42.621715 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:39:42.621757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 04:39:42.621771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-11 04:39:42.621782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-11 04:39:42.621795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-11 04:39:42.621809 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:39:42.621824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 04:39:42.621846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-11 04:39:42.621868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-11 04:39:56.605999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-11 04:39:56.606201 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:39:56.606216 | orchestrator |
2026-04-11 04:39:56.606225 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-04-11 04:39:56.606234 | orchestrator | Saturday 11 April 2026 04:39:43 +0000 (0:00:01.927) 0:01:48.921 ********
2026-04-11 04:39:56.606243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option
httpchk']}})  2026-04-11 04:39:56.606329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:39:56.606345 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:39:56.606353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:39:56.606361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:39:56.606384 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:39:56.606392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:39:56.606399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:39:56.606406 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:39:56.606413 | orchestrator | 2026-04-11 04:39:56.606421 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-11 04:39:56.606429 | orchestrator | Saturday 11 April 2026 04:39:46 +0000 (0:00:02.373) 0:01:51.295 ******** 2026-04-11 04:39:56.606436 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:39:56.606444 | 
orchestrator | ok: [testbed-node-1] 2026-04-11 04:39:56.606451 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:39:56.606458 | orchestrator | 2026-04-11 04:39:56.606465 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-11 04:39:56.606472 | orchestrator | Saturday 11 April 2026 04:39:48 +0000 (0:00:02.225) 0:01:53.520 ******** 2026-04-11 04:39:56.606479 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:39:56.606486 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:39:56.606493 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:39:56.606500 | orchestrator | 2026-04-11 04:39:56.606508 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-11 04:39:56.606516 | orchestrator | Saturday 11 April 2026 04:39:51 +0000 (0:00:02.877) 0:01:56.398 ******** 2026-04-11 04:39:56.606527 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:39:56.606536 | orchestrator | 2026-04-11 04:39:56.606544 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-11 04:39:56.606552 | orchestrator | Saturday 11 April 2026 04:39:53 +0000 (0:00:01.784) 0:01:58.182 ******** 2026-04-11 04:39:56.606582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:39:56.606594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 04:39:56.606604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:39:56.606620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:39:56.606633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 04:39:56.606701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:39:56.606719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:39:58.601327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 04:39:58.601456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:39:58.601482 | orchestrator | 2026-04-11 04:39:58.601502 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-11 04:39:58.601521 | orchestrator | Saturday 11 April 2026 04:39:57 +0000 (0:00:04.745) 0:02:02.927 ******** 2026-04-11 04:39:58.601561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:39:58.601583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 04:39:58.601597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:39:58.601628 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:39:58.601730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:39:58.601744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 04:39:58.601755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:39:58.601770 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:39:58.601781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:39:58.601792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})  2026-04-11 04:39:58.601817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:40:14.365375 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:40:14.365488 | orchestrator | 2026-04-11 04:40:14.365505 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-11 04:40:14.365518 | orchestrator | Saturday 11 April 2026 04:39:59 +0000 (0:00:01.970) 0:02:04.898 ******** 2026-04-11 04:40:14.365530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:40:14.365545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:40:14.365557 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:40:14.365568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 
04:40:14.365580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:40:14.365591 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:40:14.365602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:40:14.365630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:40:14.365686 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:40:14.365698 | orchestrator | 2026-04-11 04:40:14.365709 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-11 04:40:14.365720 | orchestrator | Saturday 11 April 2026 04:40:01 +0000 (0:00:01.656) 0:02:06.554 ******** 2026-04-11 04:40:14.365731 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:40:14.365742 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:40:14.365753 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:40:14.365767 | orchestrator | 2026-04-11 04:40:14.365785 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-11 04:40:14.365805 | orchestrator | Saturday 11 April 2026 04:40:03 +0000 (0:00:02.240) 0:02:08.795 ******** 2026-04-11 04:40:14.365821 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:40:14.365837 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:40:14.365883 | orchestrator | ok: 
[testbed-node-2] 2026-04-11 04:40:14.365904 | orchestrator | 2026-04-11 04:40:14.365923 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-11 04:40:14.365944 | orchestrator | Saturday 11 April 2026 04:40:06 +0000 (0:00:02.889) 0:02:11.685 ******** 2026-04-11 04:40:14.365963 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:40:14.365977 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:40:14.365988 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:40:14.365998 | orchestrator | 2026-04-11 04:40:14.366009 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-11 04:40:14.366079 | orchestrator | Saturday 11 April 2026 04:40:08 +0000 (0:00:01.528) 0:02:13.214 ******** 2026-04-11 04:40:14.366090 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:40:14.366101 | orchestrator | 2026-04-11 04:40:14.366112 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-11 04:40:14.366122 | orchestrator | Saturday 11 April 2026 04:40:09 +0000 (0:00:01.346) 0:02:14.561 ******** 2026-04-11 04:40:14.366135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-11 04:40:14.366173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-11 04:40:14.366186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-11 04:40:14.366198 | orchestrator | 2026-04-11 04:40:14.366209 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-11 
04:40:14.366221 | orchestrator | Saturday 11 April 2026 04:40:12 +0000 (0:00:03.524) 0:02:18.085 ******** 2026-04-11 04:40:14.366240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-11 04:40:14.366262 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:40:14.366274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-11 04:40:14.366286 | orchestrator | skipping: [testbed-node-1] 2026-04-11 
04:40:14.366305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-11 04:40:27.236461 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:40:27.236566 | orchestrator | 2026-04-11 04:40:27.236581 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-11 04:40:27.236593 | orchestrator | Saturday 11 April 2026 04:40:15 +0000 (0:00:02.593) 0:02:20.679 ******** 2026-04-11 04:40:27.236605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-11 04:40:27.236618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-11 04:40:27.236629 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:40:27.236692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-11 04:40:27.236744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-11 04:40:27.236755 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:40:27.236766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-11 04:40:27.236776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 
2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-11 04:40:27.236786 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:40:27.236796 | orchestrator | 2026-04-11 04:40:27.236806 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-11 04:40:27.236816 | orchestrator | Saturday 11 April 2026 04:40:18 +0000 (0:00:02.706) 0:02:23.385 ******** 2026-04-11 04:40:27.236826 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:40:27.236836 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:40:27.236845 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:40:27.236855 | orchestrator | 2026-04-11 04:40:27.236865 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-11 04:40:27.236874 | orchestrator | Saturday 11 April 2026 04:40:20 +0000 (0:00:01.799) 0:02:25.185 ******** 2026-04-11 04:40:27.236884 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:40:27.236893 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:40:27.236903 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:40:27.236913 | orchestrator | 2026-04-11 04:40:27.236922 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-11 04:40:27.236932 | orchestrator | Saturday 11 April 2026 04:40:22 +0000 (0:00:02.097) 0:02:27.282 ******** 2026-04-11 04:40:27.236942 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:40:27.236952 | orchestrator | 2026-04-11 04:40:27.236961 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-11 04:40:27.236973 | orchestrator | Saturday 11 April 2026 04:40:23 +0000 (0:00:01.568) 0:02:28.851 ******** 2026-04-11 04:40:27.237006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:40:27.237031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:40:27.237049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 04:40:27.237062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 04:40:27.237075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:40:27.237095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:40:29.123205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 04:40:29.123376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 04:40:29.123411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:40:29.123429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:40:29.123442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 04:40:29.123497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 04:40:29.123511 | orchestrator | 2026-04-11 04:40:29.123524 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-11 04:40:29.123536 | orchestrator | Saturday 11 April 2026 04:40:28 +0000 
(0:00:04.937) 0:02:33.788 ******** 2026-04-11 04:40:29.123554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:40:29.123567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:40:29.123579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 04:40:29.123590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 04:40:29.123609 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:40:29.123692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:40:39.492924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:40:39.493038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 04:40:39.493054 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 04:40:39.493067 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:40:39.493084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:40:39.493121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:40:39.493160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 04:40:39.493173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 04:40:39.493185 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:40:39.493197 | orchestrator | 2026-04-11 04:40:39.493209 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-11 04:40:39.493221 | orchestrator | Saturday 11 April 2026 04:40:30 +0000 (0:00:01.946) 0:02:35.735 ******** 2026-04-11 04:40:39.493233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:40:39.493246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:40:39.493259 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:40:39.493270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:40:39.493281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:40:39.493300 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:40:39.493311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-04-11 04:40:39.493322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:40:39.493333 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:40:39.493344 | orchestrator | 2026-04-11 04:40:39.493355 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-11 04:40:39.493366 | orchestrator | Saturday 11 April 2026 04:40:32 +0000 (0:00:01.753) 0:02:37.488 ******** 2026-04-11 04:40:39.493377 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:40:39.493388 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:40:39.493399 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:40:39.493409 | orchestrator | 2026-04-11 04:40:39.493420 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-11 04:40:39.493432 | orchestrator | Saturday 11 April 2026 04:40:34 +0000 (0:00:02.476) 0:02:39.965 ******** 2026-04-11 04:40:39.493445 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:40:39.493457 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:40:39.493469 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:40:39.493482 | orchestrator | 2026-04-11 04:40:39.493495 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-11 04:40:39.493507 | orchestrator | Saturday 11 April 2026 04:40:37 +0000 (0:00:02.877) 0:02:42.842 ******** 2026-04-11 04:40:39.493519 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:40:39.493532 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:40:39.493544 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:40:39.493557 | orchestrator | 2026-04-11 04:40:39.493569 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-04-11 04:40:39.493582 | orchestrator | Saturday 11 April 2026 04:40:39 +0000 (0:00:01.546) 0:02:44.389 ******** 2026-04-11 04:40:39.493595 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:40:39.493608 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:40:39.493628 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:40:45.969704 | orchestrator | 2026-04-11 04:40:45.969837 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-11 04:40:45.969865 | orchestrator | Saturday 11 April 2026 04:40:40 +0000 (0:00:01.340) 0:02:45.729 ******** 2026-04-11 04:40:45.969883 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:40:45.969901 | orchestrator | 2026-04-11 04:40:45.969939 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-11 04:40:45.969959 | orchestrator | Saturday 11 April 2026 04:40:42 +0000 (0:00:01.825) 0:02:47.554 ******** 2026-04-11 04:40:45.969984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:40:45.970112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 04:40:45.970129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 04:40:45.970141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 04:40:45.970155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 04:40:45.970200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:40:45.970215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-11 04:40:45.970236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:40:45.970258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 04:40:45.970278 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 04:40:45.970317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:40:47.750434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 04:40:47.750561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 04:40:47.750579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 04:40:47.750591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 04:40:47.750603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:40:47.750619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-11 04:40:47.750762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 04:40:47.750802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 04:40:47.750823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:40:47.750843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-11 04:40:47.750863 | orchestrator | 2026-04-11 04:40:47.750885 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-11 04:40:47.750906 | orchestrator | Saturday 11 April 2026 04:40:47 +0000 (0:00:04.841) 0:02:52.395 ******** 2026-04-11 04:40:47.750922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:40:47.750963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 04:40:48.205605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 04:40:48.205809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 04:40:48.205827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 04:40:48.205840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:40:48.205851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-11 04:40:48.205864 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:40:48.205903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:40:48.205942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 04:40:48.205962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 04:40:48.205982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 04:40:48.206099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 04:40:48.206117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:40:48.206148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:41:04.256253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-sink 5672'], 'timeout': '30'}}})  2026-04-11 04:41:04.256374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 04:41:04.256393 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:41:04.256409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 04:41:04.256422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 04:41:04.256434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 04:41:04.256463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:41:04.256519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-11 04:41:04.256533 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:41:04.256544 | orchestrator | 2026-04-11 04:41:04.256557 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-11 04:41:04.256569 | orchestrator | Saturday 11 April 2026 04:40:49 +0000 (0:00:02.233) 0:02:54.629 ******** 2026-04-11 04:41:04.256580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:41:04.256594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:41:04.256607 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:41:04.256618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:41:04.256629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:41:04.256724 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:41:04.256736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 
'backend_http_extra': ['option httpchk']}})  2026-04-11 04:41:04.256748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:41:04.256759 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:41:04.256770 | orchestrator | 2026-04-11 04:41:04.256782 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-11 04:41:04.256795 | orchestrator | Saturday 11 April 2026 04:40:51 +0000 (0:00:01.837) 0:02:56.467 ******** 2026-04-11 04:41:04.256808 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:41:04.256822 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:41:04.256834 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:41:04.256847 | orchestrator | 2026-04-11 04:41:04.256859 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-11 04:41:04.256872 | orchestrator | Saturday 11 April 2026 04:40:53 +0000 (0:00:02.185) 0:02:58.652 ******** 2026-04-11 04:41:04.256884 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:41:04.256897 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:41:04.256919 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:41:04.256932 | orchestrator | 2026-04-11 04:41:04.256946 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-11 04:41:04.256959 | orchestrator | Saturday 11 April 2026 04:40:56 +0000 (0:00:03.077) 0:03:01.730 ******** 2026-04-11 04:41:04.256971 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:41:04.256984 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:41:04.256996 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:41:04.257008 | orchestrator | 2026-04-11 04:41:04.257021 | orchestrator | TASK [include_role : glance] 
***************************************************
2026-04-11 04:41:04.257033 | orchestrator | Saturday 11 April 2026 04:40:58 +0000 (0:00:01.591) 0:03:03.322 ********
2026-04-11 04:41:04.257046 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:41:04.257059 | orchestrator |
2026-04-11 04:41:04.257071 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-04-11 04:41:04.257084 | orchestrator | Saturday 11 April 2026 04:40:59 +0000 (0:00:01.672) 0:03:04.995 ********
2026-04-11 04:41:04.257117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 04:41:04.452187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-11 04:41:04.452322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 04:41:04.452360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-11 04:41:04.452386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 04:41:04.452407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-11 04:41:08.649498 | orchestrator |
2026-04-11 04:41:08.649583 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-04-11 04:41:08.649595 | orchestrator | Saturday 11 April 2026 04:41:05 +0000 (0:00:05.703) 0:03:10.699 ********
2026-04-11 04:41:08.649619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 04:41:08.649632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-11 04:41:08.649705 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:41:08.649734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 04:41:08.649744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-11 04:41:08.649752 | orchestrator | skipping: [testbed-node-1]
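The `custom_member_list` entries in the glance items above are literal HAProxy `server` lines, one per backend node, with an optional TLS suffix when `tls_backend` is enabled. A minimal sketch of how such member lines could be generated (the `render_member_lines` helper, host names, and IPs are illustrative, not part of kolla-ansible):

```python
# Hypothetical sketch: building HAProxy "server" member lines shaped like the
# custom_member_list entries in the log above. The helper name and inputs are
# assumptions for illustration only.

def render_member_lines(members, port,
                        check="check inter 2000 rise 2 fall 5",
                        tls_suffix=""):
    """Return one HAProxy 'server' line per (name, ip) backend member."""
    lines = []
    for name, ip in members:
        line = f"server {name} {ip}:{port} {check}"
        if tls_suffix:
            line += f" {tls_suffix}"
        lines.append(line)
    return lines

members = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]

# Plain members, as in the glance_api entry.
plain = render_member_lines(members, 9292)
# TLS-verified members, as in the glance_tls_proxy entry.
tls = render_member_lines(
    members, 9292,
    tls_suffix="ssl verify required ca-file ca-certificates.crt")
```

With these inputs, `plain[0]` matches the first member line logged for `glance_api` exactly.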
2026-04-11 04:41:08.649767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 04:41:26.316389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-11 04:41:26.316480 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:41:26.316492 | orchestrator |
2026-04-11 04:41:26.316500 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-04-11 04:41:26.316508 | orchestrator | Saturday 11 April 2026 04:41:09 +0000 (0:00:04.205) 0:03:14.905 ********
2026-04-11 04:41:26.316516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-11 04:41:26.316541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-11 04:41:26.316549 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:41:26.316556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-11 04:41:26.316582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-11 04:41:26.316590 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:41:26.316597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-11 04:41:26.316604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-11 04:41:26.316611 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:41:26.316618 | orchestrator |
2026-04-11 04:41:26.316625 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-04-11 04:41:26.316631 | orchestrator | Saturday 11 April 2026 04:41:14 +0000 (0:00:04.431) 0:03:19.337 ********
2026-04-11 04:41:26.316698 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:41:26.316710 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:41:26.316719 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:41:26.316729 | orchestrator |
2026-04-11 04:41:26.316740 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-04-11 04:41:26.316761 | orchestrator | Saturday 11 April 2026 04:41:16 +0000 (0:00:02.644) 0:03:21.981 ********
2026-04-11 04:41:26.316772 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:41:26.316783 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:41:26.316791 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:41:26.316797 | orchestrator |
2026-04-11 04:41:26.316804 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-04-11 04:41:26.316810 | orchestrator | Saturday 11 April 2026 04:41:19 +0000 (0:00:03.010) 0:03:24.992 ********
2026-04-11 04:41:26.316816 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:41:26.316823 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:41:26.316829 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:41:26.316835 | orchestrator |
2026-04-11 04:41:26.316841 | orchestrator | TASK [include_role : grafana] **************************************************
2026-04-11 04:41:26.316848 | orchestrator | Saturday 11 April 2026 04:41:21 +0000 (0:00:01.457) 0:03:26.450 ********
2026-04-11 04:41:26.316854 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:41:26.316860 | orchestrator |
2026-04-11 04:41:26.316866 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-04-11 04:41:26.316872 | orchestrator | Saturday 11 April 2026 04:41:23 +0000 (0:00:01.950) 0:03:28.401 ********
2026-04-11 04:41:26.316879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 04:41:26.316895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 04:41:42.567487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 04:41:42.567606 | orchestrator |
2026-04-11 04:41:42.567623 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-04-11 04:41:42.567637 | orchestrator | Saturday 11 April 2026 04:41:27 +0000 (0:00:04.362) 0:03:32.763 ********
2026-04-11 04:41:42.567694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 04:41:42.567729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 04:41:42.567741 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:41:42.567754 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:41:42.567766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 04:41:42.567777 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:41:42.567787 | orchestrator |
2026-04-11 04:41:42.567798 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-04-11 04:41:42.567810 | orchestrator | Saturday 11 April 2026 04:41:29 +0000 (0:00:01.449) 0:03:34.213 ********
2026-04-11 04:41:42.567822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-11 04:41:42.567837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-11 04:41:42.567850 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:41:42.567884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-11 04:41:42.567896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-11 04:41:42.567908 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:41:42.567919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-11 04:41:42.567938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-11 04:41:42.567949 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:41:42.567960 | orchestrator |
2026-04-11 04:41:42.567971 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-04-11 04:41:42.567982 | orchestrator | Saturday 11 April 2026 04:41:30 +0000 (0:00:01.646) 0:03:35.860 ********
2026-04-11 04:41:42.567993 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:41:42.568006 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:41:42.568019 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:41:42.568031 | orchestrator |
2026-04-11 04:41:42.568043 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-04-11 04:41:42.568055 | orchestrator | Saturday 11 April 2026 04:41:32 +0000 (0:00:02.214) 0:03:38.074 ********
2026-04-11 04:41:42.568068 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:41:42.568080 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:41:42.568092 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:41:42.568104 | orchestrator |
2026-04-11 04:41:42.568117 | orchestrator | TASK [include_role : heat] *****************************************************
2026-04-11 04:41:42.568129 | orchestrator | Saturday 11 April 2026 04:41:35 +0000 (0:00:01.402) 0:03:40.927 ********
2026-04-11 04:41:42.568142 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:41:42.568183 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:41:42.568196 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:41:42.568209 | orchestrator |
2026-04-11 04:41:42.568221 | orchestrator | TASK [include_role : horizon] **************************************************
2026-04-11 04:41:42.568233 | orchestrator | Saturday 11 April 2026 04:41:37 +0000 (0:00:01.985) 0:03:42.330 ********
2026-04-11 04:41:42.568245 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:41:42.568257 | orchestrator |
2026-04-11 04:41:42.568271 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-04-11 04:41:42.568283 | orchestrator | Saturday 11 April 2026 04:41:39 +0000 (0:00:01.985) 0:03:44.316 ********
2026-04-11 04:41:42.568318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 04:41:44.718234 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 04:41:44.718384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 04:41:44.718427 | orchestrator | 2026-04-11 04:41:44.718441 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-11 04:41:44.718454 | orchestrator | Saturday 11 April 2026 04:41:43 +0000 (0:00:04.735) 0:03:49.051 ******** 2026-04-11 04:41:44.718468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 04:41:44.718482 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:41:44.718514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 04:41:54.921937 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:41:54.922062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 04:41:54.922088 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:41:54.922094 | orchestrator | 2026-04-11 04:41:54.922099 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] 
*********************** 2026-04-11 04:41:54.922104 | orchestrator | Saturday 11 April 2026 04:41:45 +0000 (0:00:02.049) 0:03:51.100 ******** 2026-04-11 04:41:54.922109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-11 04:41:54.922125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 04:41:54.922132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-11 04:41:54.922138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 04:41:54.922143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-11 04:41:54.922148 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:41:54.922162 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-11 04:41:54.922167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 04:41:54.922171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-11 04:41:54.922175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 04:41:54.922179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-11 04:41:54.922183 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:41:54.922187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-11 04:41:54.922194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 04:41:54.922198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-11 04:41:54.922205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-11 04:41:54.922209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-11 04:41:54.922213 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:41:54.922217 | orchestrator | 2026-04-11 04:41:54.922221 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-11 04:41:54.922225 | orchestrator | Saturday 11 April 2026 04:41:48 +0000 (0:00:02.147) 0:03:53.247 ******** 2026-04-11 04:41:54.922229 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:41:54.922233 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:41:54.922237 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:41:54.922241 | 
orchestrator | 2026-04-11 04:41:54.922245 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-11 04:41:54.922248 | orchestrator | Saturday 11 April 2026 04:41:50 +0000 (0:00:02.178) 0:03:55.426 ******** 2026-04-11 04:41:54.922252 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:41:54.922256 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:41:54.922260 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:41:54.922263 | orchestrator | 2026-04-11 04:41:54.922267 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-11 04:41:54.922271 | orchestrator | Saturday 11 April 2026 04:41:53 +0000 (0:00:02.890) 0:03:58.316 ******** 2026-04-11 04:41:54.922275 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:41:54.922279 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:41:54.922283 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:41:54.922286 | orchestrator | 2026-04-11 04:41:54.922290 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-11 04:41:54.922294 | orchestrator | Saturday 11 April 2026 04:41:54 +0000 (0:00:01.653) 0:03:59.970 ******** 2026-04-11 04:41:54.922300 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:42:03.510293 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:42:03.510404 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:42:03.510420 | orchestrator | 2026-04-11 04:42:03.510433 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-11 04:42:03.510445 | orchestrator | Saturday 11 April 2026 04:41:56 +0000 (0:00:01.403) 0:04:01.373 ******** 2026-04-11 04:42:03.510456 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:42:03.510467 | orchestrator | 2026-04-11 04:42:03.510478 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] 
******************* 2026-04-11 04:42:03.510489 | orchestrator | Saturday 11 April 2026 04:41:58 +0000 (0:00:02.142) 0:04:03.516 ******** 2026-04-11 04:42:03.510505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 04:42:03.510549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 04:42:03.510563 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 04:42:03.510589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 04:42:03.510621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 04:42:03.510642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 04:42:03.510654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 04:42:03.510666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 04:42:03.510735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 04:42:03.510748 | orchestrator | 2026-04-11 04:42:03.510759 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-11 04:42:03.510771 | orchestrator | Saturday 11 April 2026 04:42:03 +0000 (0:00:04.802) 0:04:08.318 ******** 2026-04-11 04:42:03.510792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 04:42:06.717265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 04:42:06.717362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 04:42:06.717377 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:42:06.717408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 04:42:06.717422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 04:42:06.717433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 04:42:06.717464 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:42:06.717494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 04:42:06.717506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 04:42:06.717516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 04:42:06.717526 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:42:06.717537 | orchestrator | 2026-04-11 04:42:06.717547 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-11 04:42:06.717558 | orchestrator | Saturday 11 April 2026 04:42:04 +0000 (0:00:01.687) 0:04:10.006 ******** 2026-04-11 04:42:06.717575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-11 04:42:06.717588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-11 04:42:06.717599 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:42:06.717610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-11 04:42:06.717620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-11 04:42:06.717637 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:42:06.717647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-11 04:42:06.717657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-11 04:42:06.717667 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:42:06.717721 | orchestrator | 2026-04-11 04:42:06.717732 | 
orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-11 04:42:06.717750 | orchestrator | Saturday 11 April 2026 04:42:06 +0000 (0:00:01.877) 0:04:11.884 ******** 2026-04-11 04:42:20.591020 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:42:20.591119 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:42:20.591130 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:42:20.591139 | orchestrator | 2026-04-11 04:42:20.591148 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-11 04:42:20.591157 | orchestrator | Saturday 11 April 2026 04:42:09 +0000 (0:00:02.319) 0:04:14.203 ******** 2026-04-11 04:42:20.591166 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:42:20.591174 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:42:20.591182 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:42:20.591189 | orchestrator | 2026-04-11 04:42:20.591197 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-11 04:42:20.591206 | orchestrator | Saturday 11 April 2026 04:42:11 +0000 (0:00:02.890) 0:04:17.094 ******** 2026-04-11 04:42:20.591214 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:42:20.591223 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:42:20.591231 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:42:20.591239 | orchestrator | 2026-04-11 04:42:20.591247 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-11 04:42:20.591255 | orchestrator | Saturday 11 April 2026 04:42:13 +0000 (0:00:01.419) 0:04:18.513 ******** 2026-04-11 04:42:20.591263 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:42:20.591271 | orchestrator | 2026-04-11 04:42:20.591279 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-11 04:42:20.591287 | orchestrator | 
Saturday 11 April 2026 04:42:15 +0000 (0:00:02.083) 0:04:20.597 ******** 2026-04-11 04:42:20.591300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:42:20.591326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:42:20.591356 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:42:20.591381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:42:20.591390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:42:20.591403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:42:20.591419 | orchestrator | 2026-04-11 04:42:20.591427 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-11 04:42:20.591437 | orchestrator | Saturday 11 April 2026 04:42:20 +0000 (0:00:04.770) 0:04:25.368 ******** 2026-04-11 
04:42:20.591445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:42:20.591460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:42:35.001376 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:42:35.001565 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:42:35.001588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:42:35.001637 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:42:35.001652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:42:35.001665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:42:35.001676 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:42:35.001688 | orchestrator | 2026-04-11 04:42:35.001807 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-11 
04:42:35.001828 | orchestrator | Saturday 11 April 2026 04:42:22 +0000 (0:00:02.063) 0:04:27.431 ******** 2026-04-11 04:42:35.001873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:42:35.001898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:42:35.001920 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:42:35.001942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:42:35.001957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:42:35.001970 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:42:35.001984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:42:35.001997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:42:35.002163 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:42:35.002181 | 
orchestrator | 2026-04-11 04:42:35.002195 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-11 04:42:35.002207 | orchestrator | Saturday 11 April 2026 04:42:24 +0000 (0:00:02.112) 0:04:29.543 ******** 2026-04-11 04:42:35.002220 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:42:35.002235 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:42:35.002248 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:42:35.002259 | orchestrator | 2026-04-11 04:42:35.002270 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-11 04:42:35.002281 | orchestrator | Saturday 11 April 2026 04:42:26 +0000 (0:00:02.236) 0:04:31.780 ******** 2026-04-11 04:42:35.002292 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:42:35.002303 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:42:35.002313 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:42:35.002324 | orchestrator | 2026-04-11 04:42:35.002342 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-11 04:42:35.002353 | orchestrator | Saturday 11 April 2026 04:42:29 +0000 (0:00:02.946) 0:04:34.727 ******** 2026-04-11 04:42:35.002364 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:42:35.002375 | orchestrator | 2026-04-11 04:42:35.002386 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-11 04:42:35.002397 | orchestrator | Saturday 11 April 2026 04:42:31 +0000 (0:00:02.062) 0:04:36.789 ******** 2026-04-11 04:42:35.002410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:42:35.002424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:42:35.002462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': 
'30'}}})  2026-04-11 04:42:36.764160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 04:42:36.764296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:42:36.764315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:42:36.764326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:42:36.764337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:42:36.764366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 04:42:36.764386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 04:42:36.764401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 04:42:36.764412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 04:42:36.764423 | orchestrator | 2026-04-11 04:42:36.764435 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-11 04:42:36.764446 | orchestrator | Saturday 11 April 2026 04:42:36 +0000 (0:00:04.792) 0:04:41.582 ******** 2026-04-11 04:42:36.764458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:42:36.764476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:42:39.112773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 04:42:39.112869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 04:42:39.112882 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:42:39.112909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:42:39.112919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:42:39.112929 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 04:42:39.112974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 04:42:39.112984 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:42:39.112993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:42:39.113006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:42:39.113015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 04:42:39.113023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 04:42:39.113032 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:42:39.113040 | orchestrator | 2026-04-11 04:42:39.113049 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-11 04:42:39.113064 | orchestrator | Saturday 11 April 2026 04:42:38 +0000 (0:00:02.104) 0:04:43.686 ******** 2026-04-11 04:42:39.113074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:42:39.113085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:42:39.113095 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:42:39.113103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:42:39.113118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}})  2026-04-11 04:42:54.802927 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:42:54.803045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:42:54.803066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:42:54.803080 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:42:54.803091 | orchestrator | 2026-04-11 04:42:54.803104 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-11 04:42:54.803116 | orchestrator | Saturday 11 April 2026 04:42:40 +0000 (0:00:01.782) 0:04:45.469 ******** 2026-04-11 04:42:54.803127 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:42:54.803139 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:42:54.803150 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:42:54.803161 | orchestrator | 2026-04-11 04:42:54.803172 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-11 04:42:54.803183 | orchestrator | Saturday 11 April 2026 04:42:42 +0000 (0:00:02.179) 0:04:47.648 ******** 2026-04-11 04:42:54.803194 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:42:54.803205 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:42:54.803215 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:42:54.803226 | orchestrator | 2026-04-11 04:42:54.803237 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-11 04:42:54.803265 | orchestrator | Saturday 11 April 2026 04:42:45 +0000 (0:00:02.998) 0:04:50.646 ******** 2026-04-11 04:42:54.803276 | orchestrator | 
included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:42:54.803288 | orchestrator | 2026-04-11 04:42:54.803299 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-11 04:42:54.803310 | orchestrator | Saturday 11 April 2026 04:42:48 +0000 (0:00:02.574) 0:04:53.220 ******** 2026-04-11 04:42:54.803321 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 04:42:54.803332 | orchestrator | 2026-04-11 04:42:54.803343 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-11 04:42:54.803354 | orchestrator | Saturday 11 April 2026 04:42:52 +0000 (0:00:04.314) 0:04:57.535 ******** 2026-04-11 04:42:54.803370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:42:54.803433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-11 04:42:54.803448 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:42:54.803469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:42:54.803492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-11 04:42:54.803505 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:42:54.803528 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:42:58.674280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-11 04:42:58.674397 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:42:58.674415 | orchestrator | 2026-04-11 04:42:58.674428 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-11 04:42:58.674464 | orchestrator | Saturday 11 April 2026 04:42:55 +0000 (0:00:03.535) 0:05:01.070 ******** 2026-04-11 04:42:58.674480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:42:58.674520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-11 04:42:58.674534 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:42:58.674575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:42:58.674590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-11 04:42:58.674610 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:42:58.674622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  
2026-04-11 04:42:58.674643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-11 04:43:15.661854 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:43:15.661932 | orchestrator | 2026-04-11 04:43:15.661938 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-11 04:43:15.661943 | orchestrator | Saturday 11 April 2026 04:42:59 +0000 (0:00:03.907) 0:05:04.977 ******** 2026-04-11 04:43:15.661959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-11 04:43:15.661982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-11 04:43:15.661987 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:43:15.661991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-11 04:43:15.661995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-11 04:43:15.661999 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:43:15.662003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-11 04:43:15.662007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-11 04:43:15.662011 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:43:15.662051 | orchestrator | 2026-04-11 04:43:15.662056 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-11 04:43:15.662060 | orchestrator | Saturday 11 April 2026 04:43:03 +0000 (0:00:03.596) 0:05:08.574 ******** 2026-04-11 04:43:15.662064 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:43:15.662087 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:43:15.662091 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:43:15.662095 | orchestrator | 2026-04-11 04:43:15.662099 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-11 04:43:15.662103 | orchestrator | Saturday 11 April 2026 04:43:06 +0000 (0:00:03.071) 0:05:11.645 ******** 2026-04-11 04:43:15.662106 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:43:15.662117 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:43:15.662126 | orchestrator | skipping: [testbed-node-2] 2026-04-11 
04:43:15.662130 | orchestrator | 2026-04-11 04:43:15.662134 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-11 04:43:15.662137 | orchestrator | Saturday 11 April 2026 04:43:09 +0000 (0:00:02.691) 0:05:14.337 ******** 2026-04-11 04:43:15.662141 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:43:15.662145 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:43:15.662149 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:43:15.662152 | orchestrator | 2026-04-11 04:43:15.662159 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-11 04:43:15.662163 | orchestrator | Saturday 11 April 2026 04:43:10 +0000 (0:00:01.391) 0:05:15.728 ******** 2026-04-11 04:43:15.662167 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:43:15.662171 | orchestrator | 2026-04-11 04:43:15.662174 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-11 04:43:15.662178 | orchestrator | Saturday 11 April 2026 04:43:12 +0000 (0:00:01.953) 0:05:17.682 ******** 2026-04-11 04:43:15.662183 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-11 
04:43:15.662189 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-11 04:43:15.662193 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-11 04:43:15.662197 | orchestrator | 2026-04-11 04:43:15.662201 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-11 04:43:15.662205 | orchestrator | Saturday 11 April 2026 04:43:15 +0000 (0:00:03.019) 0:05:20.702 ******** 2026-04-11 04:43:15.662212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-11 04:43:30.841285 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:43:30.841436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-11 04:43:30.841460 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:43:30.841474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-11 04:43:30.841486 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:43:30.841508 | orchestrator | 2026-04-11 04:43:30.841521 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-11 04:43:30.841534 | orchestrator | Saturday 11 April 2026 04:43:16 +0000 (0:00:01.448) 0:05:22.150 ******** 2026-04-11 04:43:30.841547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-11 04:43:30.841560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-11 04:43:30.841571 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:43:30.841583 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:43:30.841594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-11 04:43:30.841606 | orchestrator | skipping: 
[testbed-node-2] 2026-04-11 04:43:30.841617 | orchestrator | 2026-04-11 04:43:30.841628 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-11 04:43:30.841639 | orchestrator | Saturday 11 April 2026 04:43:18 +0000 (0:00:01.802) 0:05:23.953 ******** 2026-04-11 04:43:30.841650 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:43:30.841661 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:43:30.841692 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:43:30.841703 | orchestrator | 2026-04-11 04:43:30.841745 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-11 04:43:30.841757 | orchestrator | Saturday 11 April 2026 04:43:20 +0000 (0:00:01.705) 0:05:25.658 ******** 2026-04-11 04:43:30.841768 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:43:30.841779 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:43:30.841790 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:43:30.841801 | orchestrator | 2026-04-11 04:43:30.841812 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-11 04:43:30.841825 | orchestrator | Saturday 11 April 2026 04:43:22 +0000 (0:00:02.174) 0:05:27.833 ******** 2026-04-11 04:43:30.841848 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:43:30.841861 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:43:30.841874 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:43:30.841886 | orchestrator | 2026-04-11 04:43:30.841898 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-11 04:43:30.841911 | orchestrator | Saturday 11 April 2026 04:43:24 +0000 (0:00:01.434) 0:05:29.268 ******** 2026-04-11 04:43:30.841923 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:43:30.841935 | orchestrator | 2026-04-11 04:43:30.841949 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-11 04:43:30.841961 | orchestrator | Saturday 11 April 2026 04:43:26 +0000 (0:00:02.326) 0:05:31.594 ******** 2026-04-11 04:43:30.842003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:43:30.842073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:30.842090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:43:30.842117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 
'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-11 04:43:30.842153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:30.967778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-11 04:43:30.967875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-11 04:43:30.967909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:30.967921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 
'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-11 04:43:30.967954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:30.967967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:30.967978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:30.967990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:30.968006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 04:43:30.968017 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 04:43:30.968028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:30.968051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:31.090590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 04:43:31.090764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-11 04:43:31.090815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 
04:43:31.090829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 04:43:31.090843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:31.090873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:31.090946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 04:43:31.090974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-11 04:43:31.090987 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:31.090999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 04:43:31.091010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:31.091039 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 04:43:31.428264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 04:43:31.428432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:43:31.428454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:31.428497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-11 04:43:31.428532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-11 04:43:31.428555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:31.428567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:31.428581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:31.428594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 04:43:31.428612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 04:43:31.428633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:34.239190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-11 04:43:34.239298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:34.239315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:34.239331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 04:43:34.239364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 04:43:34.239377 | orchestrator | 2026-04-11 04:43:34.239390 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-11 04:43:34.239402 | orchestrator | Saturday 11 April 2026 04:43:32 +0000 (0:00:06.182) 0:05:37.777 ******** 2026-04-11 04:43:34.239434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:43:34.239471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:34.239484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-11 04:43:34.239502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-11 04:43:34.239523 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:34.390915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:34.391017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:34.391034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 04:43:34.391048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 04:43:34.391078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:34.391091 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-11 04:43:34.391142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:34.391155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:34.391169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 04:43:34.391184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-11 04:43:34.391202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 04:43:34.391230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:43:34.495123 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:43:34.495220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:34.495239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:34.495269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-11 04:43:34.495306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-11 04:43:34.495338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-11 04:43:34.495351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-11 04:43:34.495364 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:34.495382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:34.495412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:34.495424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:34.495443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 04:43:34.603179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:34.603276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 04:43:34.603294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:34.603344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:34.603358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-11 04:43:34.603369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 04:43:34.603405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:34.603426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': 
True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 04:43:34.603446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:34.603489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 04:43:34.603516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:34.603537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-11 04:43:34.603570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 04:43:49.939514 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:43:49.939627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-11 04:43:49.939648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-11 04:43:49.939691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-11 04:43:49.939706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-11 04:43:49.939718 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:43:49.939778 | orchestrator | 2026-04-11 04:43:49.939791 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-11 04:43:49.939803 | orchestrator | Saturday 11 April 2026 04:43:35 +0000 (0:00:03.241) 0:05:41.019 ******** 2026-04-11 04:43:49.939816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:43:49.939830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:43:49.939843 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:43:49.939942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:43:49.939983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:43:49.939998 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:43:49.940015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:43:49.940034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:43:49.940070 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:43:49.940090 | orchestrator | 2026-04-11 04:43:49.940108 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-11 04:43:49.940121 | orchestrator | Saturday 11 April 2026 04:43:38 +0000 
(0:00:02.794) 0:05:43.814 ******** 2026-04-11 04:43:49.940134 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:43:49.940148 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:43:49.940160 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:43:49.940173 | orchestrator | 2026-04-11 04:43:49.940186 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-11 04:43:49.940199 | orchestrator | Saturday 11 April 2026 04:43:40 +0000 (0:00:02.244) 0:05:46.058 ******** 2026-04-11 04:43:49.940212 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:43:49.940224 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:43:49.940236 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:43:49.940249 | orchestrator | 2026-04-11 04:43:49.940262 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-11 04:43:49.940274 | orchestrator | Saturday 11 April 2026 04:43:43 +0000 (0:00:02.933) 0:05:48.992 ******** 2026-04-11 04:43:49.940287 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:43:49.940300 | orchestrator | 2026-04-11 04:43:49.940313 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-11 04:43:49.940332 | orchestrator | Saturday 11 April 2026 04:43:46 +0000 (0:00:02.380) 0:05:51.373 ******** 2026-04-11 04:43:49.940348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 04:43:49.940363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 04:43:49.940390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 04:44:06.009619 | orchestrator | 2026-04-11 04:44:06.009705 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-11 04:44:06.009714 | orchestrator | Saturday 11 April 2026 04:43:51 +0000 (0:00:04.808) 0:05:56.182 ******** 2026-04-11 04:44:06.009777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 04:44:06.009787 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:44:06.009794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 04:44:06.009800 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:44:06.009805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 04:44:06.009827 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:44:06.009833 | orchestrator | 2026-04-11 04:44:06.009838 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-11 04:44:06.009843 | orchestrator | Saturday 11 April 2026 04:43:52 +0000 (0:00:01.932) 0:05:58.114 ******** 2026-04-11 04:44:06.009849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-11 04:44:06.009868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-11 04:44:06.009875 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:44:06.009880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-11 04:44:06.009885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-11 04:44:06.009890 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:44:06.009899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-11 04:44:06.009904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-11 04:44:06.009908 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:44:06.009913 | orchestrator | 2026-04-11 04:44:06.009917 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-11 04:44:06.009922 | orchestrator | Saturday 11 April 2026 04:43:54 +0000 (0:00:01.614) 0:05:59.729 ******** 2026-04-11 04:44:06.009927 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:44:06.009932 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:44:06.009937 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:44:06.009941 | orchestrator | 2026-04-11 04:44:06.009946 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-11 04:44:06.009951 | orchestrator | Saturday 11 April 2026 04:43:56 +0000 (0:00:02.202) 0:06:01.932 ******** 2026-04-11 04:44:06.009955 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:44:06.009960 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:44:06.009964 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:44:06.009969 | orchestrator | 2026-04-11 04:44:06.009973 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-11 
04:44:06.009978 | orchestrator | Saturday 11 April 2026 04:43:59 +0000 (0:00:02.984) 0:06:04.916 ******** 2026-04-11 04:44:06.009982 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:44:06.009991 | orchestrator | 2026-04-11 04:44:06.009996 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-11 04:44:06.010001 | orchestrator | Saturday 11 April 2026 04:44:02 +0000 (0:00:02.453) 0:06:07.369 ******** 2026-04-11 04:44:06.010006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:44:06.010053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:44:09.151868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:44:09.152000 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:44:09.152054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:44:09.152075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:44:09.152118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:44:09.152145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:44:09.152163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:44:09.152193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:44:09.152210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:44:09.152228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:44:09.152246 | orchestrator | 2026-04-11 04:44:09.152264 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-11 04:44:09.152292 | orchestrator | Saturday 11 April 2026 04:44:09 +0000 (0:00:06.947) 0:06:14.317 ******** 2026-04-11 04:44:10.194340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:44:10.194452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:44:10.194493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:44:10.194507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:44:10.194520 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:44:10.194554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:44:10.194575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:44:10.194595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-04-11 04:44:10.194626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:44:10.194638 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:44:10.194661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:44:10.194689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:44:30.464558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 04:44:30.464682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 04:44:30.464702 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:44:30.464717 | orchestrator | 2026-04-11 04:44:30.464730 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-11 04:44:30.464742 | orchestrator | Saturday 11 April 2026 04:44:11 +0000 (0:00:02.151) 0:06:16.469 ******** 2026-04-11 04:44:30.464828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:44:30.464843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:44:30.464858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:44:30.464871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:44:30.464881 | orchestrator | skipping: [testbed-node-0] 2026-04-11 
04:44:30.464892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:44:30.464903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:44:30.464915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:44:30.464927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:44:30.464938 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:44:30.464990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:44:30.465024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:44:30.465036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:44:30.465048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:44:30.465059 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:44:30.465070 | orchestrator | 2026-04-11 04:44:30.465081 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-11 04:44:30.465093 | orchestrator | Saturday 11 April 2026 04:44:13 +0000 (0:00:01.966) 0:06:18.435 ******** 2026-04-11 04:44:30.465106 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:44:30.465119 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:44:30.465131 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:44:30.465144 | orchestrator | 2026-04-11 04:44:30.465155 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-11 04:44:30.465166 | orchestrator | Saturday 11 April 2026 04:44:15 +0000 (0:00:02.255) 0:06:20.691 ******** 2026-04-11 04:44:30.465178 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:44:30.465188 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:44:30.465199 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:44:30.465210 | orchestrator | 2026-04-11 04:44:30.465220 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-11 04:44:30.465231 | orchestrator | Saturday 11 April 2026 04:44:18 +0000 (0:00:03.409) 0:06:24.101 ******** 2026-04-11 04:44:30.465242 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:44:30.465253 | orchestrator | 2026-04-11 04:44:30.465264 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-novncproxy] ****************** 2026-04-11 04:44:30.465274 | orchestrator | Saturday 11 April 2026 04:44:21 +0000 (0:00:02.583) 0:06:26.684 ******** 2026-04-11 04:44:30.465286 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-11 04:44:30.465298 | orchestrator | 2026-04-11 04:44:30.465310 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-11 04:44:30.465321 | orchestrator | Saturday 11 April 2026 04:44:23 +0000 (0:00:02.311) 0:06:28.996 ******** 2026-04-11 04:44:30.465334 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-11 04:44:30.465350 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-11 04:44:30.465372 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-11 04:44:30.465383 | orchestrator | 2026-04-11 04:44:30.465394 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-11 04:44:30.465406 | orchestrator | Saturday 11 April 2026 04:44:29 +0000 (0:00:05.419) 0:06:34.416 ******** 2026-04-11 04:44:30.465423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 04:44:30.465444 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:44:54.881375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 04:44:54.881493 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:44:54.881512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 04:44:54.881525 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:44:54.881536 | orchestrator | 2026-04-11 04:44:54.881548 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-11 04:44:54.881560 | orchestrator | Saturday 11 April 2026 04:44:31 +0000 (0:00:02.599) 0:06:37.016 ******** 2026-04-11 04:44:54.881572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 04:44:54.881587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 04:44:54.881600 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:44:54.881611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 04:44:54.881623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 04:44:54.881658 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:44:54.881671 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 04:44:54.881682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-11 04:44:54.881693 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:44:54.881704 | orchestrator | 2026-04-11 04:44:54.881716 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-11 04:44:54.881727 | orchestrator | Saturday 11 April 2026 04:44:34 +0000 (0:00:02.521) 0:06:39.538 ******** 2026-04-11 04:44:54.881738 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:44:54.881749 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:44:54.881816 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:44:54.881828 | orchestrator | 2026-04-11 04:44:54.881840 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-11 04:44:54.881851 | orchestrator | Saturday 11 April 2026 04:44:37 +0000 (0:00:03.429) 0:06:42.967 ******** 2026-04-11 04:44:54.881861 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:44:54.881872 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:44:54.881882 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:44:54.881893 | orchestrator | 2026-04-11 04:44:54.881904 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-11 04:44:54.881917 | orchestrator | Saturday 11 April 2026 04:44:41 +0000 (0:00:04.054) 0:06:47.022 ******** 2026-04-11 04:44:54.881930 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-11 04:44:54.881943 | orchestrator | 2026-04-11 04:44:54.881970 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-11 04:44:54.881984 | orchestrator | Saturday 11 April 2026 04:44:43 +0000 (0:00:01.917) 0:06:48.940 ******** 2026-04-11 04:44:54.882079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 04:44:54.882095 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:44:54.882109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 04:44:54.882122 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:44:54.882135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 04:44:54.882158 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:44:54.882172 | orchestrator | 2026-04-11 04:44:54.882184 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-11 04:44:54.882197 | orchestrator | Saturday 11 April 2026 04:44:46 +0000 (0:00:02.327) 0:06:51.268 ******** 2026-04-11 04:44:54.882210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 04:44:54.882223 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:44:54.882236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 04:44:54.882249 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:44:54.882262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-11 04:44:54.882275 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:44:54.882287 | orchestrator | 2026-04-11 04:44:54.882298 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-11 04:44:54.882309 | orchestrator | Saturday 11 April 2026 04:44:48 +0000 (0:00:02.500) 0:06:53.768 ******** 2026-04-11 04:44:54.882320 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:44:54.882331 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:44:54.882342 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:44:54.882353 | orchestrator | 2026-04-11 04:44:54.882363 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-11 04:44:54.882380 | orchestrator | Saturday 11 April 2026 04:44:51 +0000 (0:00:02.812) 0:06:56.581 ******** 2026-04-11 04:44:54.882391 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:44:54.882402 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:44:54.882412 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:44:54.882423 | orchestrator | 2026-04-11 04:44:54.882434 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-11 04:44:54.882445 | orchestrator | Saturday 11 April 2026 04:44:54 +0000 (0:00:03.459) 0:07:00.041 ******** 2026-04-11 04:45:21.787664 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:45:21.787849 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:45:21.787869 | orchestrator | ok: [testbed-node-2] 2026-04-11 
04:45:21.787881 | orchestrator | 2026-04-11 04:45:21.787894 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-11 04:45:21.787906 | orchestrator | Saturday 11 April 2026 04:44:58 +0000 (0:00:04.065) 0:07:04.106 ******** 2026-04-11 04:45:21.787918 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-11 04:45:21.787930 | orchestrator | 2026-04-11 04:45:21.787941 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-11 04:45:21.787980 | orchestrator | Saturday 11 April 2026 04:45:00 +0000 (0:00:01.689) 0:07:05.796 ******** 2026-04-11 04:45:21.787995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-11 04:45:21.788010 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:45:21.788023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-11 04:45:21.788034 | 
orchestrator | skipping: [testbed-node-1] 2026-04-11 04:45:21.788045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-11 04:45:21.788056 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:45:21.788067 | orchestrator | 2026-04-11 04:45:21.788078 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-11 04:45:21.788091 | orchestrator | Saturday 11 April 2026 04:45:03 +0000 (0:00:02.494) 0:07:08.291 ******** 2026-04-11 04:45:21.788102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-11 04:45:21.788113 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:45:21.788125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-11 04:45:21.788136 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:45:21.788180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-11 04:45:21.788213 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:45:21.788231 | orchestrator | 2026-04-11 04:45:21.788251 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-11 04:45:21.788272 | orchestrator | Saturday 11 April 2026 04:45:05 +0000 (0:00:02.500) 0:07:10.792 ******** 2026-04-11 04:45:21.788289 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:45:21.788309 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:45:21.788322 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:45:21.788333 | orchestrator | 2026-04-11 04:45:21.788344 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-11 04:45:21.788355 | orchestrator | Saturday 11 April 2026 04:45:07 +0000 (0:00:02.376) 0:07:13.168 ******** 2026-04-11 04:45:21.788366 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:45:21.788377 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:45:21.788387 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:45:21.788398 | orchestrator | 2026-04-11 04:45:21.788409 | orchestrator | 
TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-11 04:45:21.788419 | orchestrator | Saturday 11 April 2026 04:45:11 +0000 (0:00:03.820) 0:07:16.989 ******** 2026-04-11 04:45:21.788430 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:45:21.788440 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:45:21.788451 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:45:21.788461 | orchestrator | 2026-04-11 04:45:21.788472 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-11 04:45:21.788483 | orchestrator | Saturday 11 April 2026 04:45:16 +0000 (0:00:04.245) 0:07:21.234 ******** 2026-04-11 04:45:21.788499 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:45:21.788521 | orchestrator | 2026-04-11 04:45:21.788550 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-11 04:45:21.788567 | orchestrator | Saturday 11 April 2026 04:45:18 +0000 (0:00:02.131) 0:07:23.366 ******** 2026-04-11 04:45:21.788586 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 04:45:21.788607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 04:45:21.788625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 04:45:21.788681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 04:45:23.753089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:45:23.753178 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 04:45:23.753189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 04:45:23.753198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 04:45:23.753206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 04:45:23.753245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:45:23.753268 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 04:45:23.753276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 04:45:23.753283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 04:45:23.753290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 04:45:23.753298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:45:23.753310 | orchestrator | 2026-04-11 04:45:23.753319 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-11 04:45:23.753326 | orchestrator | Saturday 11 April 2026 04:45:23 +0000 (0:00:05.136) 0:07:28.503 ******** 2026-04-11 04:45:23.753342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 04:45:24.029221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2026-04-11 04:45:24.029334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 04:45:24.029351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 04:45:24.029364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:45:24.029401 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:45:24.029416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 04:45:24.029431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 04:45:24.029462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 04:45:24.029474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 04:45:24.029486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:45:24.029497 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:45:24.029553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 04:45:24.029572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 04:45:24.029591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 04:45:40.475224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 04:45:40.475345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 04:45:40.475362 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:45:40.475377 | orchestrator | 2026-04-11 04:45:40.475390 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-11 04:45:40.475402 | orchestrator | Saturday 11 April 2026 04:45:25 +0000 (0:00:01.827) 0:07:30.330 ******** 2026-04-11 04:45:40.475414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-11 04:45:40.475450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-11 04:45:40.475463 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:45:40.475475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-11 04:45:40.475486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-11 04:45:40.475497 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:45:40.475507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-11 04:45:40.475518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-11 04:45:40.475529 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:45:40.475540 | orchestrator | 2026-04-11 04:45:40.475551 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-11 04:45:40.475562 | orchestrator | Saturday 11 April 2026 04:45:26 +0000 (0:00:01.766) 0:07:32.096 ******** 2026-04-11 04:45:40.475572 | orchestrator | ok: [testbed-node-0] 2026-04-11 
04:45:40.475584 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:45:40.475595 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:45:40.475605 | orchestrator | 2026-04-11 04:45:40.475630 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-11 04:45:40.475641 | orchestrator | Saturday 11 April 2026 04:45:29 +0000 (0:00:02.642) 0:07:34.739 ******** 2026-04-11 04:45:40.475671 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:45:40.475694 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:45:40.475705 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:45:40.475716 | orchestrator | 2026-04-11 04:45:40.475726 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-11 04:45:40.475737 | orchestrator | Saturday 11 April 2026 04:45:32 +0000 (0:00:02.928) 0:07:37.667 ******** 2026-04-11 04:45:40.475748 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:45:40.475760 | orchestrator | 2026-04-11 04:45:40.475771 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-11 04:45:40.475823 | orchestrator | Saturday 11 April 2026 04:45:34 +0000 (0:00:02.119) 0:07:39.788 ******** 2026-04-11 04:45:40.475856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:45:40.475873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:45:40.475898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:45:40.475918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:45:40.475942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:45:43.796071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:45:43.796191 | orchestrator | 2026-04-11 04:45:43.796209 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-11 
04:45:43.796222 | orchestrator | Saturday 11 April 2026 04:45:41 +0000 (0:00:07.018) 0:07:46.806 ******** 2026-04-11 04:45:43.796236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:45:43.796267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 04:45:43.796281 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:45:43.796314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:45:43.796348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 04:45:43.796361 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:45:43.796373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:45:43.796391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 04:45:43.796403 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:45:43.796421 | orchestrator | 2026-04-11 04:45:43.796433 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-11 04:45:43.796444 | orchestrator | Saturday 11 April 2026 04:45:43 +0000 (0:00:01.741) 0:07:48.548 ******** 2026-04-11 04:45:43.796457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:45:43.796477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-11 04:45:54.146396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-11 04:45:54.146534 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:45:54.146555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:45:54.146568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-11 04:45:54.146582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-11 04:45:54.146593 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:45:54.146604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-11 04:45:54.146615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-11 04:45:54.146628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-11 04:45:54.146639 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:45:54.146650 | orchestrator | 2026-04-11 04:45:54.146679 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-11 04:45:54.146692 | orchestrator | Saturday 11 April 2026 04:45:45 +0000 (0:00:02.092) 0:07:50.640 ******** 2026-04-11 04:45:54.146703 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:45:54.146733 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:45:54.146745 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:45:54.146768 | orchestrator | 2026-04-11 04:45:54.146780 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-11 04:45:54.146849 | orchestrator | Saturday 11 April 2026 04:45:47 +0000 (0:00:01.546) 0:07:52.186 ******** 2026-04-11 04:45:54.146862 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:45:54.146873 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:45:54.146909 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:45:54.146922 | orchestrator | 2026-04-11 04:45:54.146935 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-11 04:45:54.146947 | orchestrator | Saturday 11 April 2026 04:45:49 +0000 (0:00:02.281) 0:07:54.467 ******** 2026-04-11 04:45:54.146960 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:45:54.146973 | orchestrator | 2026-04-11 04:45:54.146986 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-11 04:45:54.147011 | orchestrator | Saturday 11 April 2026 04:45:51 +0000 (0:00:02.662) 0:07:57.130 ******** 2026-04-11 04:45:54.147061 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-11 04:45:54.147081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 04:45:54.147096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:54.147109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:54.147131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 04:45:54.147154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-11 04:45:54.147169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 04:45:54.147192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:56.263174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:56.263284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-11 04:45:56.263320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 04:45:56.263356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 04:45:56.263368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:56.263380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:56.263410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 04:45:56.263423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:45:56.263442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-11 04:45:56.263462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:56.263474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:56.263486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 04:45:56.263507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:45:58.049574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-11 04:45:58.049710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.049729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:45:58.049742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.049754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-11 04:45:58.049785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 04:45:58.049876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.049890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.049901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 04:45:58.049914 | orchestrator | 2026-04-11 04:45:58.049928 | orchestrator | TASK [haproxy-config : 
Add configuration for prometheus when using single external frontend] *** 2026-04-11 04:45:58.049940 | orchestrator | Saturday 11 April 2026 04:45:57 +0000 (0:00:05.547) 0:08:02.677 ******** 2026-04-11 04:45:58.049952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-11 04:45:58.049966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 04:45:58.049988 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.267770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.268027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 04:45:58.268068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:45:58.268094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-11 04:45:58.268108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.268141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.268177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 04:45:58.268196 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:45:58.268211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-11 04:45:58.268224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 04:45:58.268236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.268250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.268270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 04:45:58.268320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:45:58.417144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-11 04:45:58.417245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.417263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-11 04:45:58.417300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.417314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2026-04-11 04:45:58.417360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 04:45:58.417375 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:45:58.417390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.417402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 04:45:58.417414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 04:45:58.417427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:45:58.417447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-11 04:45:58.417474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:46:10.910971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 04:46:10.911086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 04:46:10.911104 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:10.911119 | orchestrator |
2026-04-11 04:46:10.911132 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-04-11 04:46:10.911144 | orchestrator | Saturday 11 April 2026 04:45:59 +0000 (0:00:02.062) 0:08:04.740 ********
2026-04-11 04:46:10.911157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-11 04:46:10.911171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-11 04:46:10.911210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-11 04:46:10.911222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-11 04:46:10.911235 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:10.911246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-11 04:46:10.911258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-11 04:46:10.911284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-11 04:46:10.911313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-11 04:46:10.911325 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:10.911336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-11 04:46:10.911347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-11 04:46:10.911358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-11 04:46:10.911370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-11 04:46:10.911390 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:10.911401 | orchestrator |
2026-04-11 04:46:10.911413 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-04-11 04:46:10.911424 | orchestrator | Saturday 11 April 2026 04:46:02 +0000 (0:00:02.493) 0:08:07.233 ********
2026-04-11 04:46:10.911435 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:10.911446 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:10.911457 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:10.911468 | orchestrator |
2026-04-11 04:46:10.911479 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-04-11 04:46:10.911490 | orchestrator | Saturday 11 April 2026 04:46:03 +0000 (0:00:01.497) 0:08:08.731 ********
2026-04-11 04:46:10.911501 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:10.911512 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:10.911523 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:10.911533 | orchestrator |
2026-04-11 04:46:10.911544 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-04-11 04:46:10.911555 | orchestrator | Saturday 11 April 2026 04:46:05 +0000 (0:00:02.360) 0:08:11.092 ********
2026-04-11 04:46:10.911566 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:46:10.911576 | orchestrator |
2026-04-11 04:46:10.911587 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-04-11 04:46:10.911598 | orchestrator | Saturday 11 April 2026 04:46:08 +0000 (0:00:02.658) 0:08:13.751 ********
2026-04-11 04:46:10.911610 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 04:46:10.911641 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 04:46:26.404755 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 04:46:26.404973 | orchestrator |
2026-04-11 04:46:26.404995 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-04-11 04:46:26.405008 | orchestrator | Saturday 11 April 2026 04:46:12 +0000 (0:00:03.619) 0:08:17.370 ********
2026-04-11 04:46:26.405021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 04:46:26.405035 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:26.405064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 04:46:26.405077 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:26.405109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 04:46:26.405129 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:26.405147 | orchestrator |
2026-04-11 04:46:26.405166 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-04-11 04:46:26.405194 | orchestrator | Saturday 11 April 2026 04:46:13 +0000 (0:00:01.681) 0:08:19.051 ********
2026-04-11 04:46:26.405215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-11 04:46:26.405234 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:26.405252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-11 04:46:26.405271 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:26.405293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-11 04:46:26.405311 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:26.405332 | orchestrator |
2026-04-11 04:46:26.405351 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-04-11 04:46:26.405370 | orchestrator | Saturday 11 April 2026 04:46:15 +0000 (0:00:01.852) 0:08:20.904 ********
2026-04-11 04:46:26.405390 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:26.405431 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:26.405445 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:26.405469 | orchestrator |
2026-04-11 04:46:26.405482 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-04-11 04:46:26.405495 | orchestrator | Saturday 11 April 2026 04:46:17 +0000 (0:00:01.677) 0:08:22.581 ********
2026-04-11 04:46:26.405507 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:26.405519 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:26.405532 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:26.405545 | orchestrator |
2026-04-11 04:46:26.405557 | orchestrator | TASK [include_role : skyline] **************************************************
2026-04-11 04:46:26.405570 | orchestrator | Saturday 11 April 2026 04:46:19 +0000 (0:00:02.344) 0:08:24.926 ********
2026-04-11 04:46:26.405582 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:46:26.405595 | orchestrator |
2026-04-11 04:46:26.405607 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-04-11 04:46:26.405620 | orchestrator | Saturday 11 April 2026 04:46:22 +0000 (0:00:02.673) 0:08:27.599 ********
2026-04-11 04:46:26.405634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-11 04:46:26.405648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-11 04:46:26.405714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-11 04:46:31.558482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-11 04:46:31.558599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-11 04:46:31.558633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-11 04:46:31.558669 | orchestrator |
2026-04-11 04:46:31.558683 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-04-11 04:46:31.558695 | orchestrator | Saturday 11 April 2026 04:46:30 +0000 (0:00:08.148) 0:08:35.748 ********
2026-04-11 04:46:31.558727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-11 04:46:31.558741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-11 04:46:31.558753 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:31.558800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-11 04:46:31.558862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-11 04:46:31.558875 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:31.558897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-11 04:46:51.864950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-11 04:46:51.865074 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:51.865092 | orchestrator |
2026-04-11 04:46:51.865105 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-04-11 04:46:51.865117 | orchestrator | Saturday 11 April 2026 04:46:32 +0000 (0:00:02.070) 0:08:37.819 ********
2026-04-11 04:46:51.865129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-11 04:46:51.865184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-11 04:46:51.865198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-11 04:46:51.865210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-11 04:46:51.865222 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:51.865233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-11 04:46:51.865244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-11 04:46:51.865255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-11 04:46:51.865266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-11 04:46:51.865277 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:51.865288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-11 04:46:51.865299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-11 04:46:51.865329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-11 04:46:51.865342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-11 04:46:51.865353 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:51.865364 | orchestrator |
2026-04-11 04:46:51.865375 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-04-11 04:46:51.865386 | orchestrator | Saturday 11 April 2026 04:46:34 +0000 (0:00:02.262) 0:08:40.081 ********
2026-04-11 04:46:51.865405 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:46:51.865417 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:46:51.865430 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:46:51.865442 | orchestrator |
2026-04-11 04:46:51.865454 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-04-11 04:46:51.865467 | orchestrator | Saturday 11 April 2026 04:46:37 +0000 (0:00:02.284) 0:08:42.366 ********
2026-04-11 04:46:51.865479 | orchestrator | ok: [testbed-node-0]
2026-04-11 04:46:51.865492 | orchestrator | ok: [testbed-node-1]
2026-04-11 04:46:51.865504 | orchestrator | ok: [testbed-node-2]
2026-04-11 04:46:51.865516 | orchestrator |
2026-04-11 04:46:51.865529 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-04-11 04:46:51.865541 | orchestrator | Saturday 11 April 2026 04:46:40 +0000 (0:00:02.967) 0:08:45.333 ********
2026-04-11 04:46:51.865553 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:51.865566 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:51.865578 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:51.865591 | orchestrator |
2026-04-11 04:46:51.865604 | orchestrator | TASK [include_role : trove] ****************************************************
2026-04-11 04:46:51.865617 | orchestrator | Saturday 11 April 2026 04:46:41 +0000 (0:00:01.733) 0:08:47.067 ********
2026-04-11 04:46:51.865629 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:51.865642 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:51.865654 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:51.865667 | orchestrator |
2026-04-11 04:46:51.865686 | orchestrator | TASK [include_role : venus] ****************************************************
2026-04-11 04:46:51.865698 | orchestrator | Saturday 11 April 2026 04:46:43 +0000 (0:00:01.387) 0:08:48.455 ********
2026-04-11 04:46:51.865712 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:51.865724 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:51.865736 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:51.865749 | orchestrator |
2026-04-11 04:46:51.865762 | orchestrator | TASK [include_role : watcher] **************************************************
2026-04-11 04:46:51.865774 | orchestrator | Saturday 11 April 2026 04:46:44 +0000 (0:00:01.394) 0:08:49.849 ********
2026-04-11 04:46:51.865787 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:51.865799 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:51.865809 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:51.865851 | orchestrator |
2026-04-11 04:46:51.865869 | orchestrator | TASK [include_role : zun] ******************************************************
2026-04-11 04:46:51.865885 | orchestrator | Saturday 11 April 2026 04:46:46 +0000 (0:00:01.374) 0:08:51.223 ********
2026-04-11 04:46:51.865898 | orchestrator | skipping: [testbed-node-0]
2026-04-11 04:46:51.865910 | orchestrator | skipping: [testbed-node-1]
2026-04-11 04:46:51.865920 | orchestrator | skipping: [testbed-node-2]
2026-04-11 04:46:51.865931 | orchestrator |
2026-04-11 04:46:51.865942 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-04-11 04:46:51.865952 | orchestrator | Saturday 11 April 2026 04:46:47 +0000 (0:00:01.700) 0:08:52.924 ********
2026-04-11 04:46:51.865963 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 04:46:51.865975 | orchestrator |
2026-04-11 04:46:51.865985 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-11 04:46:51.865996 | orchestrator | Saturday 11 April 2026 04:46:50 +0000 (0:00:02.516) 0:08:55.441 ********
2026-04-11 04:46:51.866008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-11 04:46:51.866100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-11 04:46:56.707205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-11 04:46:56.707316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 04:46:56.707349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 04:46:56.707362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-11 04:46:56.707375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 04:46:56.707388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 04:46:56.707444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-11 04:46:56.707458 | orchestrator | 2026-04-11 04:46:56.707471 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-11 04:46:56.707483 | orchestrator | Saturday 11 April 2026 04:46:54 +0000 (0:00:04.363) 0:08:59.805 ******** 2026-04-11 04:46:56.707495 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 04:46:56.707507 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:46:56.707519 | orchestrator | } 2026-04-11 04:46:56.707530 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 04:46:56.707541 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:46:56.707552 | orchestrator | } 2026-04-11 04:46:56.707563 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 04:46:56.707573 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:46:56.707584 | orchestrator | } 2026-04-11 04:46:56.707595 | orchestrator | 2026-04-11 04:46:56.707606 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 04:46:56.707617 | orchestrator | Saturday 11 April 2026 04:46:56 +0000 (0:00:01.488) 0:09:01.294 ******** 2026-04-11 04:46:56.707629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-11 04:46:56.707646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 04:46:56.707659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 04:46:56.707670 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:46:56.707690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-11 04:46:56.707702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 04:46:56.707724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 04:49:00.572518 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:49:00.572641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-11 04:49:00.572677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-11 04:49:00.572691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-11 04:49:00.572704 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:49:00.572716 | orchestrator | 2026-04-11 04:49:00.572751 | orchestrator | 
RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-11 04:49:00.572764 | orchestrator | Saturday 11 April 2026 04:46:58 +0000 (0:00:02.637) 0:09:03.931 ******** 2026-04-11 04:49:00.572775 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:49:00.572786 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:49:00.572797 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:49:00.572808 | orchestrator | 2026-04-11 04:49:00.572819 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-11 04:49:00.572830 | orchestrator | Saturday 11 April 2026 04:47:00 +0000 (0:00:01.752) 0:09:05.684 ******** 2026-04-11 04:49:00.572840 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:49:00.572851 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:49:00.572862 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:49:00.572873 | orchestrator | 2026-04-11 04:49:00.572884 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-11 04:49:00.572894 | orchestrator | Saturday 11 April 2026 04:47:02 +0000 (0:00:01.507) 0:09:07.192 ******** 2026-04-11 04:49:00.572999 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:49:00.573014 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:49:00.573025 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:49:00.573040 | orchestrator | 2026-04-11 04:49:00.573054 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-11 04:49:00.573068 | orchestrator | Saturday 11 April 2026 04:47:09 +0000 (0:00:07.173) 0:09:14.365 ******** 2026-04-11 04:49:00.573081 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:49:00.573094 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:49:00.573107 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:49:00.573119 | orchestrator | 2026-04-11 04:49:00.573132 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup 
proxysql container] **************** 2026-04-11 04:49:00.573145 | orchestrator | Saturday 11 April 2026 04:47:16 +0000 (0:00:07.054) 0:09:21.419 ******** 2026-04-11 04:49:00.573158 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:49:00.573170 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:49:00.573183 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:49:00.573196 | orchestrator | 2026-04-11 04:49:00.573209 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-11 04:49:00.573222 | orchestrator | Saturday 11 April 2026 04:47:23 +0000 (0:00:07.017) 0:09:28.437 ******** 2026-04-11 04:49:00.573235 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:49:00.573248 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:49:00.573260 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:49:00.573273 | orchestrator | 2026-04-11 04:49:00.573286 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-11 04:49:00.573299 | orchestrator | Saturday 11 April 2026 04:47:30 +0000 (0:00:07.650) 0:09:36.087 ******** 2026-04-11 04:49:00.573313 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:49:00.573325 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:49:00.573338 | orchestrator | 2026-04-11 04:49:00.573351 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-11 04:49:00.573363 | orchestrator | Saturday 11 April 2026 04:47:34 +0000 (0:00:03.715) 0:09:39.802 ******** 2026-04-11 04:49:00.573377 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:49:00.573389 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:49:00.573403 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:49:00.573414 | orchestrator | 2026-04-11 04:49:00.573442 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-11 04:49:00.573454 | orchestrator | Saturday 11 
April 2026 04:47:47 +0000 (0:00:13.067) 0:09:52.869 ******** 2026-04-11 04:49:00.573465 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:49:00.573476 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:49:00.573487 | orchestrator | 2026-04-11 04:49:00.573498 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-11 04:49:00.573509 | orchestrator | Saturday 11 April 2026 04:47:52 +0000 (0:00:04.719) 0:09:57.589 ******** 2026-04-11 04:49:00.573520 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:49:00.573540 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:49:00.573551 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:49:00.573562 | orchestrator | 2026-04-11 04:49:00.573573 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-11 04:49:00.573583 | orchestrator | Saturday 11 April 2026 04:47:59 +0000 (0:00:07.307) 0:10:04.896 ******** 2026-04-11 04:49:00.573594 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:49:00.573605 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:49:00.573616 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:49:00.573627 | orchestrator | 2026-04-11 04:49:00.573637 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-11 04:49:00.573648 | orchestrator | Saturday 11 April 2026 04:48:06 +0000 (0:00:06.823) 0:10:11.720 ******** 2026-04-11 04:49:00.573659 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:49:00.573670 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:49:00.573680 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:49:00.573691 | orchestrator | 2026-04-11 04:49:00.573702 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-11 04:49:00.573713 | orchestrator | Saturday 11 April 2026 04:48:13 +0000 (0:00:06.857) 0:10:18.577 ******** 2026-04-11 04:49:00.573729 | 
orchestrator | skipping: [testbed-node-1] 2026-04-11 04:49:00.573741 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:49:00.573752 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:49:00.573762 | orchestrator | 2026-04-11 04:49:00.573773 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-11 04:49:00.573784 | orchestrator | Saturday 11 April 2026 04:48:20 +0000 (0:00:06.820) 0:10:25.398 ******** 2026-04-11 04:49:00.573795 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:49:00.573806 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:49:00.573816 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:49:00.573827 | orchestrator | 2026-04-11 04:49:00.573838 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-04-11 04:49:00.573849 | orchestrator | Saturday 11 April 2026 04:48:27 +0000 (0:00:07.124) 0:10:32.523 ******** 2026-04-11 04:49:00.573859 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:49:00.573870 | orchestrator | 2026-04-11 04:49:00.573881 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-11 04:49:00.573892 | orchestrator | Saturday 11 April 2026 04:48:31 +0000 (0:00:03.682) 0:10:36.205 ******** 2026-04-11 04:49:00.573927 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:49:00.573947 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:49:00.573967 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:49:00.573985 | orchestrator | 2026-04-11 04:49:00.574001 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-04-11 04:49:00.574012 | orchestrator | Saturday 11 April 2026 04:48:43 +0000 (0:00:12.957) 0:10:49.162 ******** 2026-04-11 04:49:00.574180 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:49:00.574193 | orchestrator | 2026-04-11 04:49:00.574204 | orchestrator | RUNNING HANDLER [loadbalancer 
: Start master keepalived container] ************* 2026-04-11 04:49:00.574215 | orchestrator | Saturday 11 April 2026 04:48:48 +0000 (0:00:04.652) 0:10:53.815 ******** 2026-04-11 04:49:00.574226 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:49:00.574237 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:49:00.574248 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:49:00.574259 | orchestrator | 2026-04-11 04:49:00.574270 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-11 04:49:00.574280 | orchestrator | Saturday 11 April 2026 04:48:55 +0000 (0:00:06.771) 0:11:00.586 ******** 2026-04-11 04:49:00.574291 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:49:00.574302 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:49:00.574313 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:49:00.574324 | orchestrator | 2026-04-11 04:49:00.574335 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-11 04:49:00.574346 | orchestrator | Saturday 11 April 2026 04:48:57 +0000 (0:00:02.380) 0:11:02.967 ******** 2026-04-11 04:49:00.574367 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:49:00.574378 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:49:00.574389 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:49:00.574400 | orchestrator | 2026-04-11 04:49:00.574410 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 04:49:00.574422 | orchestrator | testbed-node-0 : ok=129  changed=30  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-11 04:49:00.574435 | orchestrator | testbed-node-1 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-11 04:49:00.574446 | orchestrator | testbed-node-2 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-11 04:49:00.574457 | orchestrator | 2026-04-11 04:49:00.574468 | 
orchestrator | 2026-04-11 04:49:00.574479 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 04:49:00.574490 | orchestrator | Saturday 11 April 2026 04:49:00 +0000 (0:00:02.761) 0:11:05.729 ******** 2026-04-11 04:49:00.574501 | orchestrator | =============================================================================== 2026-04-11 04:49:00.574511 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.07s 2026-04-11 04:49:00.574522 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.96s 2026-04-11 04:49:00.574533 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.15s 2026-04-11 04:49:00.574555 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.65s 2026-04-11 04:49:01.373062 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.31s 2026-04-11 04:49:01.373165 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.17s 2026-04-11 04:49:01.373180 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.12s 2026-04-11 04:49:01.373192 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.05s 2026-04-11 04:49:01.373203 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.02s 2026-04-11 04:49:01.373214 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.02s 2026-04-11 04:49:01.373225 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.95s 2026-04-11 04:49:01.373236 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.86s 2026-04-11 04:49:01.373247 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.82s 
2026-04-11 04:49:01.373258 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.82s 2026-04-11 04:49:01.373269 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.77s 2026-04-11 04:49:01.373280 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.18s 2026-04-11 04:49:01.373290 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.70s 2026-04-11 04:49:01.373301 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.55s 2026-04-11 04:49:01.373331 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.42s 2026-04-11 04:49:01.373342 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 5.14s 2026-04-11 04:49:01.560638 | orchestrator | + osism apply -a upgrade opensearch 2026-04-11 04:49:02.852590 | orchestrator | 2026-04-11 04:49:02 | INFO  | Prepare task for execution of opensearch. 2026-04-11 04:49:02.917061 | orchestrator | 2026-04-11 04:49:02 | INFO  | Task daf531d0-df48-4184-9813-8a78b3f67309 (opensearch) was prepared for execution. 2026-04-11 04:49:02.917141 | orchestrator | 2026-04-11 04:49:02 | INFO  | It takes a moment until task daf531d0-df48-4184-9813-8a78b3f67309 (opensearch) has been started and output is visible here. 
2026-04-11 04:49:22.389200 | orchestrator | 2026-04-11 04:49:22.389321 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 04:49:22.389353 | orchestrator | 2026-04-11 04:49:22.389366 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 04:49:22.389378 | orchestrator | Saturday 11 April 2026 04:49:07 +0000 (0:00:01.765) 0:00:01.765 ******** 2026-04-11 04:49:22.389389 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:49:22.389401 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:49:22.389411 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:49:22.389422 | orchestrator | 2026-04-11 04:49:22.389433 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 04:49:22.389444 | orchestrator | Saturday 11 April 2026 04:49:09 +0000 (0:00:01.752) 0:00:03.518 ******** 2026-04-11 04:49:22.389456 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-11 04:49:22.389467 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-11 04:49:22.389478 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-11 04:49:22.389489 | orchestrator | 2026-04-11 04:49:22.389499 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-11 04:49:22.389510 | orchestrator | 2026-04-11 04:49:22.389521 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-11 04:49:22.389531 | orchestrator | Saturday 11 April 2026 04:49:12 +0000 (0:00:02.523) 0:00:06.042 ******** 2026-04-11 04:49:22.389543 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:49:22.389554 | orchestrator | 2026-04-11 04:49:22.389564 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-04-11 04:49:22.389575 | orchestrator | Saturday 11 April 2026 04:49:15 +0000 (0:00:03.628) 0:00:09.670 ******** 2026-04-11 04:49:22.389586 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-11 04:49:22.389596 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-11 04:49:22.389607 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-11 04:49:22.389617 | orchestrator | 2026-04-11 04:49:22.389628 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-11 04:49:22.389639 | orchestrator | Saturday 11 April 2026 04:49:19 +0000 (0:00:03.435) 0:00:13.106 ******** 2026-04-11 04:49:22.389653 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:49:22.389673 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:49:22.389742 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:49:22.389760 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:49:22.389776 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:49:22.389795 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:49:22.389817 | orchestrator | 2026-04-11 04:49:22.389829 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-11 04:49:22.389842 | orchestrator | Saturday 11 April 2026 04:49:21 +0000 (0:00:02.382) 0:00:15.488 ******** 2026-04-11 04:49:22.389854 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:49:22.389867 | orchestrator | 2026-04-11 04:49:22.389887 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-11 04:49:28.390390 | orchestrator | Saturday 11 April 2026 04:49:23 +0000 
(0:00:01.824) 0:00:17.313 ******** 2026-04-11 04:49:28.390521 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:49:28.390549 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:49:28.390569 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:49:28.390633 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:49:28.390680 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:49:28.390701 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:49:28.390718 | orchestrator | 2026-04-11 04:49:28.390735 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-11 04:49:28.390750 | orchestrator | Saturday 11 April 2026 04:49:27 +0000 (0:00:04.330) 0:00:21.643 ******** 2026-04-11 04:49:28.390766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:49:28.390811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:49:30.763436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 04:49:30.763546 | 
orchestrator | skipping: [testbed-node-0] 2026-04-11 04:49:30.763564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 04:49:30.763602 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:49:30.763625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:49:30.763657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 04:49:30.763670 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:49:30.763682 | orchestrator | 2026-04-11 04:49:30.763694 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-11 04:49:30.763707 | orchestrator | Saturday 11 April 2026 04:49:29 +0000 (0:00:02.058) 0:00:23.702 ******** 2026-04-11 04:49:30.763719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:49:30.763731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option 
httpchk GET /api/status']}}}})  2026-04-11 04:49:30.763751 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:49:30.763768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:49:30.763787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:49:34.271617 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 04:49:34.271764 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:49:34.271795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': 
'30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 04:49:34.271849 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:49:34.271872 | orchestrator | 2026-04-11 04:49:34.271894 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-11 04:49:34.271916 | orchestrator | Saturday 11 April 2026 04:49:31 +0000 (0:00:02.092) 0:00:25.794 ******** 2026-04-11 04:49:34.271995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:49:34.272045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:49:34.272068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:49:34.272105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:49:34.272139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:49:34.272181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:49:47.180421 | orchestrator | 2026-04-11 04:49:47.180521 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-11 04:49:47.180532 | orchestrator | Saturday 11 April 2026 04:49:35 +0000 (0:00:03.411) 0:00:29.206 ******** 2026-04-11 04:49:47.180539 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:49:47.180569 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:49:47.180577 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:49:47.180584 | orchestrator | 2026-04-11 04:49:47.180593 | orchestrator | TASK [opensearch : 
Copying over opensearch-dashboards config file] ************* 2026-04-11 04:49:47.180600 | orchestrator | Saturday 11 April 2026 04:49:38 +0000 (0:00:03.485) 0:00:32.692 ******** 2026-04-11 04:49:47.180606 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:49:47.180613 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:49:47.180621 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:49:47.180628 | orchestrator | 2026-04-11 04:49:47.180636 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-04-11 04:49:47.180644 | orchestrator | Saturday 11 April 2026 04:49:42 +0000 (0:00:03.227) 0:00:35.919 ******** 2026-04-11 04:49:47.180653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:49:47.180676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:49:47.180683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 04:49:47.180706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:49:47.180721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:49:47.180733 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-11 04:49:47.180740 | orchestrator | 2026-04-11 04:49:47.180747 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-11 04:49:47.180754 | orchestrator | Saturday 11 April 2026 04:49:45 +0000 (0:00:03.394) 0:00:39.314 ******** 2026-04-11 04:49:47.180762 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 04:49:47.180771 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:49:47.180778 | orchestrator | } 2026-04-11 04:49:47.180784 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 04:49:47.180791 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:49:47.180798 | orchestrator | } 2026-04-11 04:49:47.180804 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 04:49:47.180810 | orchestrator 
|  "msg": "Notifying handlers" 2026-04-11 04:49:47.180816 | orchestrator | } 2026-04-11 04:49:47.180823 | orchestrator | 2026-04-11 04:49:47.180830 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 04:49:47.180840 | orchestrator | Saturday 11 April 2026 04:49:46 +0000 (0:00:01.369) 0:00:40.684 ******** 2026-04-11 04:49:47.180852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:52:59.352884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 04:52:59.353010 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:52:59.353046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:52:59.353063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 04:52:59.353145 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:52:59.353179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 04:52:59.353193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-11 04:52:59.353205 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:52:59.353216 | orchestrator | 2026-04-11 04:52:59.353229 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-11 04:52:59.353241 | orchestrator | Saturday 11 April 2026 04:49:49 +0000 (0:00:02.466) 0:00:43.151 ******** 2026-04-11 04:52:59.353252 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:52:59.353263 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:52:59.353274 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:52:59.353285 | orchestrator | 2026-04-11 04:52:59.353296 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-11 04:52:59.353313 | orchestrator | Saturday 11 April 2026 04:49:50 +0000 (0:00:01.356) 0:00:44.507 ******** 2026-04-11 04:52:59.353324 | orchestrator | 
2026-04-11 04:52:59.353335 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-11 04:52:59.353345 | orchestrator | Saturday 11 April 2026 04:49:51 +0000 (0:00:00.463) 0:00:44.970 ******** 2026-04-11 04:52:59.353356 | orchestrator | 2026-04-11 04:52:59.353367 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-11 04:52:59.353378 | orchestrator | Saturday 11 April 2026 04:49:51 +0000 (0:00:00.454) 0:00:45.425 ******** 2026-04-11 04:52:59.353388 | orchestrator | 2026-04-11 04:52:59.353399 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-11 04:52:59.353410 | orchestrator | Saturday 11 April 2026 04:49:52 +0000 (0:00:00.795) 0:00:46.221 ******** 2026-04-11 04:52:59.353429 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:52:59.353441 | orchestrator | 2026-04-11 04:52:59.353452 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-11 04:52:59.353463 | orchestrator | Saturday 11 April 2026 04:49:55 +0000 (0:00:03.425) 0:00:49.646 ******** 2026-04-11 04:52:59.353474 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:52:59.353484 | orchestrator | 2026-04-11 04:52:59.353495 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-11 04:52:59.353506 | orchestrator | Saturday 11 April 2026 04:50:00 +0000 (0:00:04.815) 0:00:54.461 ******** 2026-04-11 04:52:59.353517 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:52:59.353528 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:52:59.353539 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:52:59.353550 | orchestrator | 2026-04-11 04:52:59.353561 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-11 04:52:59.353571 | orchestrator | Saturday 11 April 2026 04:51:09 +0000 (0:01:09.278) 
0:02:03.740 ******** 2026-04-11 04:52:59.353582 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:52:59.353593 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:52:59.353604 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:52:59.353615 | orchestrator | 2026-04-11 04:52:59.353625 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-11 04:52:59.353636 | orchestrator | Saturday 11 April 2026 04:52:47 +0000 (0:01:37.710) 0:03:41.451 ******** 2026-04-11 04:52:59.353648 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:52:59.353659 | orchestrator | 2026-04-11 04:52:59.353670 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-11 04:52:59.353681 | orchestrator | Saturday 11 April 2026 04:52:49 +0000 (0:00:01.881) 0:03:43.332 ******** 2026-04-11 04:52:59.353691 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:52:59.353702 | orchestrator | 2026-04-11 04:52:59.353713 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-11 04:52:59.353724 | orchestrator | Saturday 11 April 2026 04:52:52 +0000 (0:00:03.502) 0:03:46.835 ******** 2026-04-11 04:52:59.353734 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:52:59.353745 | orchestrator | 2026-04-11 04:52:59.353756 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-11 04:52:59.353767 | orchestrator | Saturday 11 April 2026 04:52:56 +0000 (0:00:03.155) 0:03:49.991 ******** 2026-04-11 04:52:59.353778 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:52:59.353788 | orchestrator | 2026-04-11 04:52:59.353800 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-11 04:52:59.353818 | orchestrator | Saturday 11 April 2026 04:52:59 +0000 (0:00:03.200) 0:03:53.191 
******** 2026-04-11 04:53:02.679545 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:53:02.679652 | orchestrator | 2026-04-11 04:53:02.679669 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-11 04:53:02.679682 | orchestrator | Saturday 11 April 2026 04:53:00 +0000 (0:00:01.234) 0:03:54.426 ******** 2026-04-11 04:53:02.679694 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:53:02.679705 | orchestrator | 2026-04-11 04:53:02.679778 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 04:53:02.679792 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 04:53:02.679805 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-11 04:53:02.679816 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-11 04:53:02.679827 | orchestrator | 2026-04-11 04:53:02.679838 | orchestrator | 2026-04-11 04:53:02.679849 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 04:53:02.679891 | orchestrator | Saturday 11 April 2026 04:53:02 +0000 (0:00:01.727) 0:03:56.154 ******** 2026-04-11 04:53:02.679902 | orchestrator | =============================================================================== 2026-04-11 04:53:02.679913 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 97.71s 2026-04-11 04:53:02.679924 | orchestrator | opensearch : Restart opensearch container ------------------------------ 69.28s 2026-04-11 04:53:02.679935 | orchestrator | opensearch : Perform a flush -------------------------------------------- 4.82s 2026-04-11 04:53:02.679960 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 4.33s 2026-04-11 04:53:02.679972 | orchestrator | 
opensearch : include_tasks ---------------------------------------------- 3.63s 2026-04-11 04:53:02.679983 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.50s 2026-04-11 04:53:02.679994 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.49s 2026-04-11 04:53:02.680005 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 3.44s 2026-04-11 04:53:02.680016 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.42s 2026-04-11 04:53:02.680027 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.41s 2026-04-11 04:53:02.680038 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.39s 2026-04-11 04:53:02.680049 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.23s 2026-04-11 04:53:02.680060 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.20s 2026-04-11 04:53:02.680071 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 3.16s 2026-04-11 04:53:02.680105 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.52s 2026-04-11 04:53:02.680119 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.47s 2026-04-11 04:53:02.680132 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.38s 2026-04-11 04:53:02.680146 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 2.09s 2026-04-11 04:53:02.680158 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 2.06s 2026-04-11 04:53:02.680173 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.88s 2026-04-11 04:53:02.865520 | orchestrator | + 
osism apply -a upgrade memcached 2026-04-11 04:53:04.144049 | orchestrator | 2026-04-11 04:53:04 | INFO  | Prepare task for execution of memcached. 2026-04-11 04:53:04.218873 | orchestrator | 2026-04-11 04:53:04 | INFO  | Task 7b4421d4-b7a2-4b24-871f-36a17491786e (memcached) was prepared for execution. 2026-04-11 04:53:04.219010 | orchestrator | 2026-04-11 04:53:04 | INFO  | It takes a moment until task 7b4421d4-b7a2-4b24-871f-36a17491786e (memcached) has been started and output is visible here. 2026-04-11 04:53:39.141098 | orchestrator | 2026-04-11 04:53:39.141229 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 04:53:39.141242 | orchestrator | 2026-04-11 04:53:39.141249 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 04:53:39.141256 | orchestrator | Saturday 11 April 2026 04:53:09 +0000 (0:00:01.843) 0:00:01.843 ******** 2026-04-11 04:53:39.141262 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:53:39.141270 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:53:39.141276 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:53:39.141282 | orchestrator | 2026-04-11 04:53:39.141288 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 04:53:39.141295 | orchestrator | Saturday 11 April 2026 04:53:11 +0000 (0:00:01.724) 0:00:03.568 ******** 2026-04-11 04:53:39.141302 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-11 04:53:39.141309 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-11 04:53:39.141315 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-11 04:53:39.141337 | orchestrator | 2026-04-11 04:53:39.141344 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-11 04:53:39.141350 | orchestrator | 2026-04-11 04:53:39.141356 | orchestrator | TASK 
[memcached : include_tasks] *********************************************** 2026-04-11 04:53:39.141362 | orchestrator | Saturday 11 April 2026 04:53:15 +0000 (0:00:04.147) 0:00:07.716 ******** 2026-04-11 04:53:39.141369 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:53:39.141376 | orchestrator | 2026-04-11 04:53:39.141382 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-11 04:53:39.141388 | orchestrator | Saturday 11 April 2026 04:53:17 +0000 (0:00:02.387) 0:00:10.104 ******** 2026-04-11 04:53:39.141395 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-04-11 04:53:39.141401 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-04-11 04:53:39.141407 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-04-11 04:53:39.141413 | orchestrator | 2026-04-11 04:53:39.141419 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-11 04:53:39.141425 | orchestrator | Saturday 11 April 2026 04:53:19 +0000 (0:00:02.090) 0:00:12.195 ******** 2026-04-11 04:53:39.141431 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-04-11 04:53:39.141437 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-04-11 04:53:39.141443 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-04-11 04:53:39.141450 | orchestrator | 2026-04-11 04:53:39.141456 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-04-11 04:53:39.141462 | orchestrator | Saturday 11 April 2026 04:53:22 +0000 (0:00:02.644) 0:00:14.839 ******** 2026-04-11 04:53:39.141470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-11 04:53:39.141491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-11 04:53:39.141513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-11 04:53:39.141525 | orchestrator | 2026-04-11 04:53:39.141532 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-04-11 04:53:39.141538 | orchestrator | Saturday 11 April 2026 04:53:24 +0000 (0:00:02.588) 0:00:17.427 ******** 2026-04-11 04:53:39.141544 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 04:53:39.141551 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:53:39.141557 | orchestrator | } 2026-04-11 04:53:39.141563 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 04:53:39.141570 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:53:39.141576 | orchestrator | } 2026-04-11 04:53:39.141582 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 04:53:39.141588 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:53:39.141594 | orchestrator | } 2026-04-11 04:53:39.141600 | orchestrator | 2026-04-11 04:53:39.141606 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 04:53:39.141613 | orchestrator | Saturday 11 April 2026 04:53:26 +0000 (0:00:01.545) 0:00:18.973 ******** 2026-04-11 04:53:39.141619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-11 04:53:39.141626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-11 04:53:39.141633 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:53:39.141639 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:53:39.141649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-11 04:53:39.141656 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:53:39.141662 | orchestrator | 
2026-04-11 04:53:39.141669 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-11 04:53:39.141675 | orchestrator | Saturday 11 April 2026 04:53:28 +0000 (0:00:02.105) 0:00:21.078 ******** 2026-04-11 04:53:39.141692 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:53:39.141713 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:53:39.141719 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:53:39.141725 | orchestrator | 2026-04-11 04:53:39.141731 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 04:53:39.141738 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 04:53:39.141746 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 04:53:39.141752 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 04:53:39.141758 | orchestrator | 2026-04-11 04:53:39.141764 | orchestrator | 2026-04-11 04:53:39.141771 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 04:53:39.141782 | orchestrator | Saturday 11 April 2026 04:53:39 +0000 (0:00:10.557) 0:00:31.636 ******** 2026-04-11 04:53:39.493888 | orchestrator | =============================================================================== 2026-04-11 04:53:39.493980 | orchestrator | memcached : Restart memcached container -------------------------------- 10.56s 2026-04-11 04:53:39.493992 | orchestrator | Group hosts based on enabled services ----------------------------------- 4.15s 2026-04-11 04:53:39.494003 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.64s 2026-04-11 04:53:39.494012 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.59s 2026-04-11 04:53:39.494075 | 
orchestrator | memcached : include_tasks ----------------------------------------------- 2.39s 2026-04-11 04:53:39.494084 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.11s 2026-04-11 04:53:39.494094 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.09s 2026-04-11 04:53:39.494104 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.73s 2026-04-11 04:53:39.494156 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.55s 2026-04-11 04:53:39.694700 | orchestrator | + osism apply -a upgrade redis 2026-04-11 04:53:41.015904 | orchestrator | 2026-04-11 04:53:41 | INFO  | Prepare task for execution of redis. 2026-04-11 04:53:41.079263 | orchestrator | 2026-04-11 04:53:41 | INFO  | Task c07319d3-3734-4cb6-93e2-a4247bc68311 (redis) was prepared for execution. 2026-04-11 04:53:41.079353 | orchestrator | 2026-04-11 04:53:41 | INFO  | It takes a moment until task c07319d3-3734-4cb6-93e2-a4247bc68311 (redis) has been started and output is visible here. 
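The redis play that follows loops over service definitions of the same shape as the memcached items above: each `item=` carries a `container_name`, `image`, `enabled` flag, `volumes`, and a `healthcheck`. As a minimal sketch (structure simplified from the log output; this is not the actual role code), filtering such a service map down to the enabled containers looks like:

```python
# Simplified service map, modeled on the item= structures in the log.
services = {
    "redis": {
        "container_name": "redis",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328",
    },
    "redis-sentinel": {
        "container_name": "redis_sentinel",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328",
    },
}

# Collect container names of enabled services, which is effectively what the
# per-item task loops do before templating configs and notifying restarts.
enabled_containers = [
    svc["container_name"] for svc in services.values() if svc.get("enabled")
]
print(enabled_containers)  # ['redis', 'redis_sentinel']
```

Services with `enabled: False` (such as the `haproxy.memcached` entry in the memcached run) would simply drop out of the list, which matches the `skipping:` results seen above.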
2026-04-11 04:53:57.604016 | orchestrator | 2026-04-11 04:53:57.604217 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 04:53:57.604241 | orchestrator | 2026-04-11 04:53:57.604255 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 04:53:57.604266 | orchestrator | Saturday 11 April 2026 04:53:46 +0000 (0:00:01.803) 0:00:01.803 ******** 2026-04-11 04:53:57.604278 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:53:57.604290 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:53:57.604301 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:53:57.604312 | orchestrator | 2026-04-11 04:53:57.604323 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 04:53:57.604334 | orchestrator | Saturday 11 April 2026 04:53:47 +0000 (0:00:01.661) 0:00:03.465 ******** 2026-04-11 04:53:57.604345 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-11 04:53:57.604357 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-11 04:53:57.604368 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-11 04:53:57.604379 | orchestrator | 2026-04-11 04:53:57.604390 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-11 04:53:57.604406 | orchestrator | 2026-04-11 04:53:57.604425 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-11 04:53:57.604469 | orchestrator | Saturday 11 April 2026 04:53:49 +0000 (0:00:01.696) 0:00:05.162 ******** 2026-04-11 04:53:57.604519 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:53:57.604540 | orchestrator | 2026-04-11 04:53:57.604619 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-11 
04:53:57.604640 | orchestrator | Saturday 11 April 2026 04:53:52 +0000 (0:00:03.194) 0:00:08.357 ******** 2026-04-11 04:53:57.604676 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 04:53:57.604702 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 04:53:57.604723 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 04:53:57.604744 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 04:53:57.604789 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 04:53:57.604803 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 04:53:57.604827 | orchestrator | 2026-04-11 04:53:57.604838 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-11 04:53:57.604850 | orchestrator | Saturday 11 April 2026 04:53:55 +0000 (0:00:02.754) 0:00:11.111 ******** 2026-04-11 04:53:57.604867 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 04:53:57.604879 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 04:53:57.604891 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 04:53:57.604902 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 04:53:57.604921 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 04:54:04.838230 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 04:54:04.838375 | orchestrator | 2026-04-11 04:54:04.838398 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-11 04:54:04.838414 | orchestrator | Saturday 11 April 2026 04:53:58 +0000 (0:00:03.162) 0:00:14.274 ******** 2026-04-11 04:54:04.838444 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 04:54:04.838459 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
2026-04-11 04:54:04.838473 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 04:54:04.838488 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 04:54:04.838503 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 04:54:04.838549 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 04:54:04.838560 | orchestrator | 2026-04-11 04:54:04.838568 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-04-11 04:54:04.838577 | orchestrator | Saturday 11 April 2026 04:54:02 +0000 (0:00:04.104) 0:00:18.378 ******** 2026-04-11 04:54:04.838591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 04:54:04.838600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 04:54:04.838608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-11 04:54:04.838617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 04:54:04.838629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 04:54:04.838650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-11 04:54:32.606745 | orchestrator | 2026-04-11 04:54:32.606868 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-04-11 04:54:32.606887 | orchestrator | Saturday 11 April 2026 04:54:05 +0000 (0:00:03.080) 0:00:21.458 ******** 2026-04-11 04:54:32.606900 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 04:54:32.606913 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:54:32.606924 | orchestrator | } 2026-04-11 04:54:32.606936 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 04:54:32.606947 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:54:32.606958 | orchestrator | } 2026-04-11 04:54:32.606968 | orchestrator | changed: 
[testbed-node-2] => { 2026-04-11 04:54:32.606979 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:54:32.606990 | orchestrator | } 2026-04-11 04:54:32.607002 | orchestrator | 2026-04-11 04:54:32.607014 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 04:54:32.607024 | orchestrator | Saturday 11 April 2026 04:54:07 +0000 (0:00:01.402) 0:00:22.861 ******** 2026-04-11 04:54:32.607054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-11 04:54:32.607069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-11 04:54:32.607082 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:54:32.607093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 
'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-11 04:54:32.607125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-11 04:54:32.607137 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:54:32.607180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-11 04:54:32.607215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-11 04:54:32.607227 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:54:32.607238 | orchestrator | 2026-04-11 04:54:32.607249 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-11 04:54:32.607262 | orchestrator | Saturday 11 April 2026 04:54:09 +0000 (0:00:01.984) 0:00:24.846 ******** 2026-04-11 04:54:32.607274 | orchestrator | 2026-04-11 04:54:32.607294 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-11 04:54:32.607307 | orchestrator | Saturday 11 April 2026 04:54:09 +0000 (0:00:00.442) 0:00:25.289 ******** 2026-04-11 04:54:32.607325 | orchestrator | 2026-04-11 04:54:32.607344 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-11 04:54:32.607363 | orchestrator | Saturday 11 April 2026 04:54:10 +0000 (0:00:00.458) 0:00:25.747 ******** 2026-04-11 04:54:32.607381 | orchestrator | 2026-04-11 04:54:32.607399 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-11 04:54:32.607417 | orchestrator | Saturday 11 April 2026 04:54:11 +0000 (0:00:00.797) 0:00:26.545 ******** 2026-04-11 04:54:32.607435 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:54:32.607454 | orchestrator | 
changed: [testbed-node-2] 2026-04-11 04:54:32.607474 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:54:32.607492 | orchestrator | 2026-04-11 04:54:32.607512 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-04-11 04:54:32.607532 | orchestrator | Saturday 11 April 2026 04:54:21 +0000 (0:00:10.336) 0:00:36.882 ******** 2026-04-11 04:54:32.607552 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:54:32.607566 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:54:32.607579 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:54:32.607591 | orchestrator | 2026-04-11 04:54:32.607602 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 04:54:32.607614 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 04:54:32.607637 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 04:54:32.607647 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 04:54:32.607658 | orchestrator | 2026-04-11 04:54:32.607669 | orchestrator | 2026-04-11 04:54:32.607680 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 04:54:32.607690 | orchestrator | Saturday 11 April 2026 04:54:32 +0000 (0:00:10.895) 0:00:47.778 ******** 2026-04-11 04:54:32.607701 | orchestrator | =============================================================================== 2026-04-11 04:54:32.607711 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.90s 2026-04-11 04:54:32.607722 | orchestrator | redis : Restart redis container ---------------------------------------- 10.34s 2026-04-11 04:54:32.607732 | orchestrator | redis : Copying over redis config files --------------------------------- 4.10s 2026-04-11 
04:54:32.607743 | orchestrator | redis : include_tasks --------------------------------------------------- 3.20s 2026-04-11 04:54:32.607753 | orchestrator | redis : Copying over default config.json files -------------------------- 3.16s 2026-04-11 04:54:32.607764 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.08s 2026-04-11 04:54:32.607775 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.75s 2026-04-11 04:54:32.607785 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.98s 2026-04-11 04:54:32.607796 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.70s 2026-04-11 04:54:32.607806 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.70s 2026-04-11 04:54:32.607817 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.66s 2026-04-11 04:54:32.607827 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.40s 2026-04-11 04:54:32.801631 | orchestrator | + osism apply -a upgrade mariadb 2026-04-11 04:54:34.119920 | orchestrator | 2026-04-11 04:54:34 | INFO  | Prepare task for execution of mariadb. 2026-04-11 04:54:34.187749 | orchestrator | 2026-04-11 04:54:34 | INFO  | Task 9f5f8935-fbe8-4f85-a7cb-810768588309 (mariadb) was prepared for execution. 2026-04-11 04:54:34.187845 | orchestrator | 2026-04-11 04:54:34 | INFO  | It takes a moment until task 9f5f8935-fbe8-4f85-a7cb-810768588309 (mariadb) has been started and output is visible here. 
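The TASKS RECAP blocks above report per-task durations in a fixed text format (`role : Task name ---- 10.34s`). A small, hypothetical helper (not part of the job itself) can parse such lines into (task, seconds) pairs to spot the slowest steps across upgrade runs:

```python
import re

# Matches recap lines such as:
#   "redis : Restart redis container ----------------- 10.34s"
RECAP_RE = re.compile(r"^(?P<task>.+?)\s+-{2,}\s+(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return (task, seconds) tuples from TASKS RECAP lines, slowest first."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return sorted(out, key=lambda t: -t[1])

# Sample lines copied from the redis TASKS RECAP above.
recap = [
    "redis : Restart redis-sentinel container ------------------------------- 10.90s",
    "redis : Restart redis container ---------------------------------------- 10.34s",
    "redis : Copying over redis config files --------------------------------- 4.10s",
]
print(parse_recap(recap)[0])  # ('redis : Restart redis-sentinel container', 10.9)
```

In both recaps here the container restart handlers dominate (roughly 10s each), while config templating and checks stay in the 1-4s range.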
2026-04-11 04:55:01.216410 | orchestrator | 2026-04-11 04:55:01.216525 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 04:55:01.216541 | orchestrator | 2026-04-11 04:55:01.216553 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 04:55:01.216564 | orchestrator | Saturday 11 April 2026 04:54:39 +0000 (0:00:01.845) 0:00:01.845 ******** 2026-04-11 04:55:01.216575 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:55:01.216587 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:55:01.216597 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:55:01.216608 | orchestrator | 2026-04-11 04:55:01.216619 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 04:55:01.216630 | orchestrator | Saturday 11 April 2026 04:54:41 +0000 (0:00:01.692) 0:00:03.539 ******** 2026-04-11 04:55:01.216641 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-11 04:55:01.216652 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-11 04:55:01.216663 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-11 04:55:01.216673 | orchestrator | 2026-04-11 04:55:01.216684 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-11 04:55:01.216695 | orchestrator | 2026-04-11 04:55:01.216706 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-11 04:55:01.216740 | orchestrator | Saturday 11 April 2026 04:54:43 +0000 (0:00:02.234) 0:00:05.773 ******** 2026-04-11 04:55:01.216752 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 04:55:01.216762 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-11 04:55:01.216781 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-11 04:55:01.216792 | orchestrator | 
2026-04-11 04:55:01.216802 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-11 04:55:01.216813 | orchestrator | Saturday 11 April 2026 04:54:45 +0000 (0:00:01.659) 0:00:07.432 ******** 2026-04-11 04:55:01.216825 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:55:01.216837 | orchestrator | 2026-04-11 04:55:01.216848 | orchestrator | TASK [mariadb : Remove mariadb-clustercheck] *********************************** 2026-04-11 04:55:01.216858 | orchestrator | Saturday 11 April 2026 04:54:47 +0000 (0:00:02.463) 0:00:09.896 ******** 2026-04-11 04:55:01.216869 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:55:01.216879 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:55:01.216890 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:55:01.216901 | orchestrator | 2026-04-11 04:55:01.216912 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-11 04:55:01.216925 | orchestrator | Saturday 11 April 2026 04:54:50 +0000 (0:00:02.623) 0:00:12.519 ******** 2026-04-11 04:55:01.216944 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 04:55:01.217003 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 04:55:01.217041 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 04:55:01.217063 | orchestrator | 2026-04-11 04:55:01.217079 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-11 04:55:01.217096 | orchestrator | Saturday 11 April 2026 04:54:53 +0000 (0:00:03.875) 0:00:16.395 ******** 2026-04-11 04:55:01.217115 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:01.217135 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:55:01.217155 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:01.217209 | orchestrator | 2026-04-11 04:55:01.217227 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-11 04:55:01.217240 | orchestrator | Saturday 11 April 2026 04:54:55 +0000 (0:00:01.681) 0:00:18.076 ******** 2026-04-11 04:55:01.217253 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:01.217266 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:01.217279 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:55:01.217290 | orchestrator | 2026-04-11 04:55:01.217300 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-11 04:55:01.217311 | orchestrator | Saturday 
11 April 2026 04:54:57 +0000 (0:00:02.188) 0:00:20.265 ******** 2026-04-11 04:55:01.217350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 04:55:12.943028 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 04:55:12.943168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 04:55:12.943273 | orchestrator | 2026-04-11 04:55:12.943289 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-11 04:55:12.943303 | orchestrator | Saturday 11 April 2026 04:55:02 +0000 (0:00:04.609) 0:00:24.875 ******** 2026-04-11 
04:55:12.943314 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:12.943327 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:12.943338 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:55:12.943349 | orchestrator | 2026-04-11 04:55:12.943360 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-11 04:55:12.943390 | orchestrator | Saturday 11 April 2026 04:55:04 +0000 (0:00:02.021) 0:00:26.896 ******** 2026-04-11 04:55:12.943402 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:55:12.943413 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:55:12.943423 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:55:12.943434 | orchestrator | 2026-04-11 04:55:12.943445 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-11 04:55:12.943456 | orchestrator | Saturday 11 April 2026 04:55:09 +0000 (0:00:04.658) 0:00:31.555 ******** 2026-04-11 04:55:12.943467 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:55:12.943478 | orchestrator | 2026-04-11 04:55:12.943489 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-11 04:55:12.943500 | orchestrator | Saturday 11 April 2026 04:55:10 +0000 (0:00:01.699) 0:00:33.254 ******** 2026-04-11 04:55:12.943512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:55:12.943532 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:12.943561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:55:19.986560 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:19.986663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:55:19.986699 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:19.986709 | orchestrator | 2026-04-11 04:55:19.986719 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-11 04:55:19.986729 | orchestrator | Saturday 11 April 2026 04:55:14 +0000 (0:00:03.301) 0:00:36.556 ******** 2026-04-11 04:55:19.986752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:55:19.986762 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:19.986790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:55:19.986814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:55:19.986825 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:19.986834 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:19.986842 | orchestrator | 2026-04-11 04:55:19.986851 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-11 04:55:19.986860 | orchestrator | Saturday 11 April 2026 04:55:17 +0000 (0:00:03.583) 0:00:40.139 ******** 2026-04-11 04:55:19.986878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:55:25.014603 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:25.014738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:55:25.014757 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:25.014769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:55:25.014800 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:25.014811 | orchestrator | 2026-04-11 04:55:25.014821 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-11 04:55:25.014832 | orchestrator | Saturday 11 April 2026 04:55:21 +0000 (0:00:04.111) 0:00:44.251 ******** 2026-04-11 04:55:25.014861 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 04:55:25.014879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 04:55:25.014906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-11 04:55:40.559162 | orchestrator | 2026-04-11 04:55:40.559307 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-11 04:55:40.559325 | orchestrator | Saturday 11 April 2026 04:55:26 +0000 (0:00:04.315) 0:00:48.567 ******** 2026-04-11 04:55:40.559338 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 04:55:40.559351 | 
orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:55:40.559363 | orchestrator | } 2026-04-11 04:55:40.559374 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 04:55:40.559386 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:55:40.559397 | orchestrator | } 2026-04-11 04:55:40.559408 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 04:55:40.559419 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 04:55:40.559430 | orchestrator | } 2026-04-11 04:55:40.559441 | orchestrator | 2026-04-11 04:55:40.559476 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 04:55:40.559498 | orchestrator | Saturday 11 April 2026 04:55:27 +0000 (0:00:01.438) 0:00:50.006 ******** 2026-04-11 04:55:40.559522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:55:40.559603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:55:40.559629 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:40.559647 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:40.559675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:55:40.559711 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:40.559726 | orchestrator | 2026-04-11 04:55:40.559739 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-11 04:55:40.559751 | orchestrator | Saturday 11 April 2026 04:55:31 +0000 (0:00:04.047) 0:00:54.054 ******** 2026-04-11 04:55:40.559764 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:40.559776 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:40.559789 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:40.559801 | orchestrator | 2026-04-11 04:55:40.559814 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-11 04:55:40.559827 | orchestrator | Saturday 11 April 2026 04:55:33 +0000 (0:00:01.594) 0:00:55.648 ******** 2026-04-11 04:55:40.559839 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:40.559852 | orchestrator | 2026-04-11 04:55:40.559865 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-04-11 04:55:40.559876 | orchestrator | Saturday 11 April 2026 04:55:34 +0000 (0:00:01.161) 0:00:56.810 ******** 2026-04-11 04:55:40.559887 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:40.559898 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:40.559909 | orchestrator | skipping: [testbed-node-2] 2026-04-11 
04:55:40.559920 | orchestrator | 2026-04-11 04:55:40.559931 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-04-11 04:55:40.559942 | orchestrator | Saturday 11 April 2026 04:55:35 +0000 (0:00:01.374) 0:00:58.184 ******** 2026-04-11 04:55:40.559953 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:40.559964 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:40.559974 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:40.559985 | orchestrator | 2026-04-11 04:55:40.559996 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-04-11 04:55:40.560007 | orchestrator | Saturday 11 April 2026 04:55:37 +0000 (0:00:01.432) 0:00:59.617 ******** 2026-04-11 04:55:40.560018 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:40.560029 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:40.560040 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:40.560050 | orchestrator | 2026-04-11 04:55:40.560061 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-04-11 04:55:40.560072 | orchestrator | Saturday 11 April 2026 04:55:38 +0000 (0:00:01.634) 0:01:01.251 ******** 2026-04-11 04:55:40.560083 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:40.560094 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:40.560105 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:40.560116 | orchestrator | 2026-04-11 04:55:40.560126 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-04-11 04:55:40.560137 | orchestrator | Saturday 11 April 2026 04:55:40 +0000 (0:00:01.329) 0:01:02.581 ******** 2026-04-11 04:55:40.560148 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:40.560159 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:40.560170 | orchestrator | skipping: [testbed-node-2] 2026-04-11 
04:55:40.560181 | orchestrator | 2026-04-11 04:55:40.560233 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-04-11 04:55:58.448313 | orchestrator | Saturday 11 April 2026 04:55:41 +0000 (0:00:01.405) 0:01:03.987 ******** 2026-04-11 04:55:58.448431 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:58.448448 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:58.448459 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:58.448470 | orchestrator | 2026-04-11 04:55:58.448482 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-04-11 04:55:58.448494 | orchestrator | Saturday 11 April 2026 04:55:42 +0000 (0:00:01.346) 0:01:05.334 ******** 2026-04-11 04:55:58.448505 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-11 04:55:58.448532 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-11 04:55:58.448543 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-11 04:55:58.448554 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:58.448565 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-11 04:55:58.448576 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-11 04:55:58.448587 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-11 04:55:58.448597 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:58.448608 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-11 04:55:58.448619 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-11 04:55:58.448629 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-11 04:55:58.448640 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:58.448651 | orchestrator | 2026-04-11 04:55:58.448662 | orchestrator | TASK [mariadb : Writing hostname of host with the 
largest seqno to temp file] *** 2026-04-11 04:55:58.448673 | orchestrator | Saturday 11 April 2026 04:55:44 +0000 (0:00:01.670) 0:01:07.004 ******** 2026-04-11 04:55:58.448683 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:58.448694 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:58.448705 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:58.448716 | orchestrator | 2026-04-11 04:55:58.448727 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-04-11 04:55:58.448738 | orchestrator | Saturday 11 April 2026 04:55:45 +0000 (0:00:01.372) 0:01:08.376 ******** 2026-04-11 04:55:58.448749 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:58.448760 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:58.448771 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:58.448781 | orchestrator | 2026-04-11 04:55:58.448792 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-04-11 04:55:58.448803 | orchestrator | Saturday 11 April 2026 04:55:47 +0000 (0:00:01.470) 0:01:09.847 ******** 2026-04-11 04:55:58.448814 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:58.448825 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:58.448835 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:58.448846 | orchestrator | 2026-04-11 04:55:58.448857 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-04-11 04:55:58.448868 | orchestrator | Saturday 11 April 2026 04:55:48 +0000 (0:00:01.451) 0:01:11.298 ******** 2026-04-11 04:55:58.448879 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:58.448890 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:58.448900 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:58.448911 | orchestrator | 2026-04-11 04:55:58.448922 | orchestrator | TASK [mariadb : Starting first MariaDB container] 
****************************** 2026-04-11 04:55:58.448933 | orchestrator | Saturday 11 April 2026 04:55:50 +0000 (0:00:01.371) 0:01:12.669 ******** 2026-04-11 04:55:58.448944 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:58.448954 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:58.448965 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:58.448976 | orchestrator | 2026-04-11 04:55:58.448987 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-04-11 04:55:58.448997 | orchestrator | Saturday 11 April 2026 04:55:51 +0000 (0:00:01.486) 0:01:14.156 ******** 2026-04-11 04:55:58.449028 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:58.449039 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:58.449050 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:58.449060 | orchestrator | 2026-04-11 04:55:58.449071 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-11 04:55:58.449082 | orchestrator | Saturday 11 April 2026 04:55:53 +0000 (0:00:01.371) 0:01:15.528 ******** 2026-04-11 04:55:58.449093 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:58.449104 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:58.449114 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:58.449125 | orchestrator | 2026-04-11 04:55:58.449136 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-11 04:55:58.449146 | orchestrator | Saturday 11 April 2026 04:55:54 +0000 (0:00:01.513) 0:01:17.042 ******** 2026-04-11 04:55:58.449157 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:58.449168 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:58.449178 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:55:58.449189 | orchestrator | 2026-04-11 04:55:58.449199 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] 
**************************** 2026-04-11 04:55:58.449235 | orchestrator | Saturday 11 April 2026 04:55:56 +0000 (0:00:01.446) 0:01:18.488 ******** 2026-04-11 04:55:58.449283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}}}})  2026-04-11 04:55:58.449300 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:55:58.449313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:55:58.449333 
| orchestrator | skipping: [testbed-node-1] 2026-04-11 04:55:58.449359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:56:16.238742 | orchestrator | skipping: [testbed-node-2] 
2026-04-11 04:56:16.238892 | orchestrator | 2026-04-11 04:56:16.238923 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-04-11 04:56:16.238944 | orchestrator | Saturday 11 April 2026 04:55:59 +0000 (0:00:03.501) 0:01:21.990 ******** 2026-04-11 04:56:16.238963 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:56:16.238976 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:56:16.238987 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:56:16.238998 | orchestrator | 2026-04-11 04:56:16.239010 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-11 04:56:16.239021 | orchestrator | Saturday 11 April 2026 04:56:00 +0000 (0:00:01.381) 0:01:23.371 ******** 2026-04-11 04:56:16.239036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:56:16.239078 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:56:16.239134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:56:16.239160 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:56:16.239179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-11 04:56:16.239213 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:56:16.239263 | orchestrator | 2026-04-11 04:56:16.239283 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-11 04:56:16.239303 | orchestrator | Saturday 11 April 2026 04:56:04 +0000 (0:00:03.641) 0:01:27.013 ******** 2026-04-11 04:56:16.239322 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:56:16.239341 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:56:16.239362 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:56:16.239383 | orchestrator | 2026-04-11 04:56:16.239402 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-11 04:56:16.239420 | orchestrator | Saturday 11 April 2026 04:56:06 +0000 (0:00:01.696) 0:01:28.709 ******** 2026-04-11 04:56:16.239438 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:56:16.239457 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:56:16.239476 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:56:16.239495 | orchestrator | 2026-04-11 04:56:16.239514 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-11 04:56:16.239534 | orchestrator | Saturday 11 April 2026 04:56:07 +0000 (0:00:01.335) 0:01:30.045 ******** 2026-04-11 04:56:16.239553 | orchestrator | skipping: [testbed-node-0] 2026-04-11 
04:56:16.239573 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:56:16.239591 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:56:16.239609 | orchestrator | 2026-04-11 04:56:16.239621 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-11 04:56:16.239632 | orchestrator | Saturday 11 April 2026 04:56:09 +0000 (0:00:01.474) 0:01:31.520 ******** 2026-04-11 04:56:16.239642 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:56:16.239653 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:56:16.239664 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:56:16.239675 | orchestrator | 2026-04-11 04:56:16.239685 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-11 04:56:16.239696 | orchestrator | Saturday 11 April 2026 04:56:10 +0000 (0:00:01.713) 0:01:33.233 ******** 2026-04-11 04:56:16.239707 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:56:16.239717 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:56:16.239728 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:56:16.239739 | orchestrator | 2026-04-11 04:56:16.239749 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-11 04:56:16.239760 | orchestrator | Saturday 11 April 2026 04:56:12 +0000 (0:00:01.773) 0:01:35.007 ******** 2026-04-11 04:56:16.239771 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:56:16.239783 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:56:16.239804 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:56:16.239815 | orchestrator | 2026-04-11 04:56:16.239825 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-11 04:56:16.239836 | orchestrator | Saturday 11 April 2026 04:56:14 +0000 (0:00:02.077) 0:01:37.085 ******** 2026-04-11 04:56:16.239847 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:56:16.239858 | 
orchestrator | ok: [testbed-node-1] 2026-04-11 04:56:16.239868 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:56:16.239879 | orchestrator | 2026-04-11 04:56:16.239889 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-11 04:56:16.239900 | orchestrator | Saturday 11 April 2026 04:56:16 +0000 (0:00:01.445) 0:01:38.531 ******** 2026-04-11 04:56:16.239922 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:58:57.049951 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:58:57.050107 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:58:57.050122 | orchestrator | 2026-04-11 04:58:57.050132 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-11 04:58:57.050143 | orchestrator | Saturday 11 April 2026 04:56:17 +0000 (0:00:01.405) 0:01:39.936 ******** 2026-04-11 04:58:57.050151 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:58:57.050159 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:58:57.050168 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:58:57.050176 | orchestrator | 2026-04-11 04:58:57.050184 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-11 04:58:57.050192 | orchestrator | Saturday 11 April 2026 04:56:19 +0000 (0:00:01.729) 0:01:41.665 ******** 2026-04-11 04:58:57.050200 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:58:57.050208 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:58:57.050220 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:58:57.050234 | orchestrator | 2026-04-11 04:58:57.050248 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-11 04:58:57.050261 | orchestrator | Saturday 11 April 2026 04:56:21 +0000 (0:00:01.753) 0:01:43.419 ******** 2026-04-11 04:58:57.050275 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:58:57.050289 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:58:57.050303 
| orchestrator | skipping: [testbed-node-2] 2026-04-11 04:58:57.050317 | orchestrator | 2026-04-11 04:58:57.050331 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-11 04:58:57.050345 | orchestrator | Saturday 11 April 2026 04:56:22 +0000 (0:00:01.399) 0:01:44.819 ******** 2026-04-11 04:58:57.050360 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:58:57.050374 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:58:57.050388 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:58:57.050402 | orchestrator | 2026-04-11 04:58:57.050416 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-11 04:58:57.050464 | orchestrator | Saturday 11 April 2026 04:56:25 +0000 (0:00:03.311) 0:01:48.130 ******** 2026-04-11 04:58:57.050474 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:58:57.050483 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:58:57.050492 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:58:57.050501 | orchestrator | 2026-04-11 04:58:57.050511 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-11 04:58:57.050520 | orchestrator | Saturday 11 April 2026 04:56:27 +0000 (0:00:01.390) 0:01:49.521 ******** 2026-04-11 04:58:57.050530 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:58:57.050538 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:58:57.050547 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:58:57.050556 | orchestrator | 2026-04-11 04:58:57.050566 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-11 04:58:57.050575 | orchestrator | Saturday 11 April 2026 04:56:28 +0000 (0:00:01.364) 0:01:50.886 ******** 2026-04-11 04:58:57.050585 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:58:57.050595 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:58:57.050604 | orchestrator | skipping: [testbed-node-2] 
2026-04-11 04:58:57.050613 | orchestrator | 2026-04-11 04:58:57.050622 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-11 04:58:57.050655 | orchestrator | Saturday 11 April 2026 04:56:30 +0000 (0:00:01.817) 0:01:52.703 ******** 2026-04-11 04:58:57.050665 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:58:57.050674 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:58:57.050683 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:58:57.050693 | orchestrator | 2026-04-11 04:58:57.050702 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-11 04:58:57.050711 | orchestrator | Saturday 11 April 2026 04:56:31 +0000 (0:00:01.369) 0:01:54.073 ******** 2026-04-11 04:58:57.050720 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:58:57.050730 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:58:57.050739 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:58:57.050749 | orchestrator | 2026-04-11 04:58:57.050758 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-11 04:58:57.050767 | orchestrator | Saturday 11 April 2026 04:56:33 +0000 (0:00:01.793) 0:01:55.866 ******** 2026-04-11 04:58:57.050777 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:58:57.050785 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:58:57.050793 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:58:57.050801 | orchestrator | 2026-04-11 04:58:57.050808 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-11 04:58:57.050829 | orchestrator | Saturday 11 April 2026 04:56:34 +0000 (0:00:01.440) 0:01:57.307 ******** 2026-04-11 04:58:57.050837 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:58:57.050853 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:58:57.050861 | orchestrator | skipping: [testbed-node-2] 
2026-04-11 04:58:57.050869 | orchestrator | 2026-04-11 04:58:57.050877 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-11 04:58:57.050885 | orchestrator | 2026-04-11 04:58:57.050924 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-11 04:58:57.050933 | orchestrator | Saturday 11 April 2026 04:56:37 +0000 (0:00:02.274) 0:01:59.581 ******** 2026-04-11 04:58:57.050941 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:58:57.050949 | orchestrator | 2026-04-11 04:58:57.050957 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-11 04:58:57.050965 | orchestrator | Saturday 11 April 2026 04:57:02 +0000 (0:00:25.772) 0:02:25.354 ******** 2026-04-11 04:58:57.050972 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:58:57.050980 | orchestrator | 2026-04-11 04:58:57.050988 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-11 04:58:57.050996 | orchestrator | Saturday 11 April 2026 04:57:08 +0000 (0:00:05.618) 0:02:30.972 ******** 2026-04-11 04:58:57.051007 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:58:57.051015 | orchestrator | 2026-04-11 04:58:57.051023 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-11 04:58:57.051031 | orchestrator | 2026-04-11 04:58:57.051038 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-11 04:58:57.051046 | orchestrator | Saturday 11 April 2026 04:57:11 +0000 (0:00:02.914) 0:02:33.887 ******** 2026-04-11 04:58:57.051054 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:58:57.051062 | orchestrator | 2026-04-11 04:58:57.051070 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-11 04:58:57.051094 | orchestrator | Saturday 11 April 2026 04:57:37 
+0000 (0:00:26.164) 0:03:00.051 ******** 2026-04-11 04:58:57.051103 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left). 2026-04-11 04:58:57.051112 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:58:57.051120 | orchestrator | 2026-04-11 04:58:57.051128 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-11 04:58:57.051135 | orchestrator | Saturday 11 April 2026 04:57:45 +0000 (0:00:07.824) 0:03:07.875 ******** 2026-04-11 04:58:57.051143 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:58:57.051151 | orchestrator | 2026-04-11 04:58:57.051159 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-11 04:58:57.051173 | orchestrator | 2026-04-11 04:58:57.051181 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-11 04:58:57.051189 | orchestrator | Saturday 11 April 2026 04:57:48 +0000 (0:00:02.981) 0:03:10.857 ******** 2026-04-11 04:58:57.051197 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:58:57.051204 | orchestrator | 2026-04-11 04:58:57.051212 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-11 04:58:57.051220 | orchestrator | Saturday 11 April 2026 04:58:15 +0000 (0:00:26.657) 0:03:37.514 ******** 2026-04-11 04:58:57.051228 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left). 
2026-04-11 04:58:57.051236 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:58:57.051243 | orchestrator | 2026-04-11 04:58:57.051251 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-11 04:58:57.051259 | orchestrator | Saturday 11 April 2026 04:58:23 +0000 (0:00:08.097) 0:03:45.612 ******** 2026-04-11 04:58:57.051267 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:58:57.051275 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-11 04:58:57.051282 | orchestrator | 2026-04-11 04:58:57.051290 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-11 04:58:57.051298 | orchestrator | skipping: no hosts matched 2026-04-11 04:58:57.051306 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-11 04:58:57.051314 | orchestrator | mariadb_bootstrap_restart 2026-04-11 04:58:57.051321 | orchestrator | 2026-04-11 04:58:57.051329 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-11 04:58:57.051337 | orchestrator | skipping: no hosts matched 2026-04-11 04:58:57.051345 | orchestrator | 2026-04-11 04:58:57.051352 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-11 04:58:57.051360 | orchestrator | 2026-04-11 04:58:57.051368 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-11 04:58:57.051375 | orchestrator | Saturday 11 April 2026 04:58:28 +0000 (0:00:05.127) 0:03:50.740 ******** 2026-04-11 04:58:57.051383 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:58:57.051391 | orchestrator | 2026-04-11 04:58:57.051399 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-11 04:58:57.051406 | orchestrator | Saturday 11 April 2026 
04:58:30 +0000 (0:00:01.921) 0:03:52.661 ******** 2026-04-11 04:58:57.051414 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:58:57.051422 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:58:57.051454 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:58:57.051462 | orchestrator | 2026-04-11 04:58:57.051470 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-11 04:58:57.051477 | orchestrator | Saturday 11 April 2026 04:58:33 +0000 (0:00:03.113) 0:03:55.775 ******** 2026-04-11 04:58:57.051485 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:58:57.051493 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:58:57.051501 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:58:57.051509 | orchestrator | 2026-04-11 04:58:57.051517 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-11 04:58:57.051524 | orchestrator | Saturday 11 April 2026 04:58:36 +0000 (0:00:03.157) 0:03:58.933 ******** 2026-04-11 04:58:57.051532 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:58:57.051540 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:58:57.051548 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:58:57.051556 | orchestrator | 2026-04-11 04:58:57.051564 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-11 04:58:57.051571 | orchestrator | Saturday 11 April 2026 04:58:39 +0000 (0:00:03.097) 0:04:02.030 ******** 2026-04-11 04:58:57.051579 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:58:57.051587 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:58:57.051595 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:58:57.051608 | orchestrator | 2026-04-11 04:58:57.051616 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-11 04:58:57.051623 | orchestrator | Saturday 11 April 2026 04:58:42 +0000 
(0:00:03.047) 0:04:05.078 ******** 2026-04-11 04:58:57.051631 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:58:57.051639 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:58:57.051647 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:58:57.051654 | orchestrator | 2026-04-11 04:58:57.051662 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-11 04:58:57.051670 | orchestrator | Saturday 11 April 2026 04:58:49 +0000 (0:00:06.530) 0:04:11.608 ******** 2026-04-11 04:58:57.051677 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:58:57.051685 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:58:57.051693 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:58:57.051701 | orchestrator | 2026-04-11 04:58:57.051708 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-11 04:58:57.051720 | orchestrator | Saturday 11 April 2026 04:58:52 +0000 (0:00:03.509) 0:04:15.118 ******** 2026-04-11 04:58:57.051728 | orchestrator | skipping: [testbed-node-0] 2026-04-11 04:58:57.051736 | orchestrator | skipping: [testbed-node-1] 2026-04-11 04:58:57.051744 | orchestrator | skipping: [testbed-node-2] 2026-04-11 04:58:57.051752 | orchestrator | 2026-04-11 04:58:57.051759 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-11 04:58:57.051767 | orchestrator | Saturday 11 April 2026 04:58:54 +0000 (0:00:01.396) 0:04:16.514 ******** 2026-04-11 04:58:57.051775 | orchestrator | ok: [testbed-node-0] 2026-04-11 04:58:57.051783 | orchestrator | ok: [testbed-node-1] 2026-04-11 04:58:57.051791 | orchestrator | ok: [testbed-node-2] 2026-04-11 04:58:57.051799 | orchestrator | 2026-04-11 04:58:57.051812 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-11 04:59:18.545051 | orchestrator | Saturday 11 April 2026 04:58:57 +0000 (0:00:03.722) 0:04:20.237 ******** 
2026-04-11 04:59:18.545173 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 04:59:18.545189 | orchestrator | 2026-04-11 04:59:18.545201 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-04-11 04:59:18.545212 | orchestrator | Saturday 11 April 2026 04:58:59 +0000 (0:00:01.920) 0:04:22.157 ******** 2026-04-11 04:59:18.545224 | orchestrator | changed: [testbed-node-0] 2026-04-11 04:59:18.545237 | orchestrator | changed: [testbed-node-1] 2026-04-11 04:59:18.545248 | orchestrator | changed: [testbed-node-2] 2026-04-11 04:59:18.545258 | orchestrator | 2026-04-11 04:59:18.545270 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 04:59:18.545282 | orchestrator | testbed-node-0 : ok=35  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-11 04:59:18.545294 | orchestrator | testbed-node-1 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-11 04:59:18.545305 | orchestrator | testbed-node-2 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-11 04:59:18.545315 | orchestrator | 2026-04-11 04:59:18.545326 | orchestrator | 2026-04-11 04:59:18.545337 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 04:59:18.545348 | orchestrator | Saturday 11 April 2026 04:59:18 +0000 (0:00:18.421) 0:04:40.578 ******** 2026-04-11 04:59:18.545359 | orchestrator | =============================================================================== 2026-04-11 04:59:18.545369 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 78.59s 2026-04-11 04:59:18.545380 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 21.54s 2026-04-11 04:59:18.545391 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 
18.42s 2026-04-11 04:59:18.545401 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 11.02s 2026-04-11 04:59:18.545438 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.53s 2026-04-11 04:59:18.545449 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.66s 2026-04-11 04:59:18.545460 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.61s 2026-04-11 04:59:18.545470 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.32s 2026-04-11 04:59:18.545481 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.11s 2026-04-11 04:59:18.545557 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.05s 2026-04-11 04:59:18.545571 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.88s 2026-04-11 04:59:18.545582 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.72s 2026-04-11 04:59:18.545595 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.64s 2026-04-11 04:59:18.545607 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.58s 2026-04-11 04:59:18.545620 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.51s 2026-04-11 04:59:18.545633 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.50s 2026-04-11 04:59:18.545645 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.31s 2026-04-11 04:59:18.545658 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.30s 2026-04-11 04:59:18.545670 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.16s 
2026-04-11 04:59:18.545682 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.11s 2026-04-11 04:59:18.744426 | orchestrator | + osism apply -a upgrade rabbitmq 2026-04-11 04:59:20.039626 | orchestrator | 2026-04-11 04:59:20 | INFO  | Prepare task for execution of rabbitmq. 2026-04-11 04:59:20.106627 | orchestrator | 2026-04-11 04:59:20 | INFO  | Task b7d0e4ff-3255-4d1f-b141-0a0bb78765a9 (rabbitmq) was prepared for execution. 2026-04-11 04:59:20.106723 | orchestrator | 2026-04-11 04:59:20 | INFO  | It takes a moment until task b7d0e4ff-3255-4d1f-b141-0a0bb78765a9 (rabbitmq) has been started and output is visible here. 2026-04-11 05:00:02.788782 | orchestrator | 2026-04-11 05:00:02.788889 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 05:00:02.788903 | orchestrator | 2026-04-11 05:00:02.788913 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 05:00:02.788924 | orchestrator | Saturday 11 April 2026 04:59:25 +0000 (0:00:01.686) 0:00:01.686 ******** 2026-04-11 05:00:02.788934 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:00:02.788961 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:00:02.788972 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:00:02.788982 | orchestrator | 2026-04-11 05:00:02.788992 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 05:00:02.789002 | orchestrator | Saturday 11 April 2026 04:59:26 +0000 (0:00:01.837) 0:00:03.524 ******** 2026-04-11 05:00:02.789012 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-11 05:00:02.789023 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-11 05:00:02.789032 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-11 05:00:02.789042 | orchestrator | 2026-04-11 05:00:02.789052 | orchestrator | PLAY [Apply role 
rabbitmq] ***************************************************** 2026-04-11 05:00:02.789062 | orchestrator | 2026-04-11 05:00:02.789072 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-11 05:00:02.789081 | orchestrator | Saturday 11 April 2026 04:59:29 +0000 (0:00:02.891) 0:00:06.416 ******** 2026-04-11 05:00:02.789092 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 05:00:02.789103 | orchestrator | 2026-04-11 05:00:02.789113 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-11 05:00:02.789143 | orchestrator | Saturday 11 April 2026 04:59:32 +0000 (0:00:03.080) 0:00:09.496 ******** 2026-04-11 05:00:02.789154 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:00:02.789164 | orchestrator | 2026-04-11 05:00:02.789173 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-11 05:00:02.789183 | orchestrator | Saturday 11 April 2026 04:59:35 +0000 (0:00:02.697) 0:00:12.194 ******** 2026-04-11 05:00:02.789193 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:00:02.789202 | orchestrator | 2026-04-11 05:00:02.789212 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-11 05:00:02.789222 | orchestrator | Saturday 11 April 2026 04:59:38 +0000 (0:00:02.899) 0:00:15.094 ******** 2026-04-11 05:00:02.789232 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:00:02.789242 | orchestrator | 2026-04-11 05:00:02.789252 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-11 05:00:02.789262 | orchestrator | Saturday 11 April 2026 04:59:47 +0000 (0:00:09.226) 0:00:24.320 ******** 2026-04-11 05:00:02.789271 | orchestrator | ok: [testbed-node-0] => { 2026-04-11 05:00:02.789281 | orchestrator |  "changed": false, 2026-04-11 
05:00:02.789292 | orchestrator |  "msg": "All assertions passed" 2026-04-11 05:00:02.789302 | orchestrator | } 2026-04-11 05:00:02.789314 | orchestrator | 2026-04-11 05:00:02.789325 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-11 05:00:02.789336 | orchestrator | Saturday 11 April 2026 04:59:49 +0000 (0:00:01.351) 0:00:25.672 ******** 2026-04-11 05:00:02.789349 | orchestrator | ok: [testbed-node-0] => { 2026-04-11 05:00:02.789361 | orchestrator |  "changed": false, 2026-04-11 05:00:02.789372 | orchestrator |  "msg": "All assertions passed" 2026-04-11 05:00:02.789384 | orchestrator | } 2026-04-11 05:00:02.789395 | orchestrator | 2026-04-11 05:00:02.789407 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-11 05:00:02.789420 | orchestrator | Saturday 11 April 2026 04:59:50 +0000 (0:00:01.643) 0:00:27.315 ******** 2026-04-11 05:00:02.789431 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 05:00:02.789468 | orchestrator | 2026-04-11 05:00:02.789479 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-11 05:00:02.789491 | orchestrator | Saturday 11 April 2026 04:59:52 +0000 (0:00:01.934) 0:00:29.249 ******** 2026-04-11 05:00:02.789503 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:00:02.789514 | orchestrator | 2026-04-11 05:00:02.789526 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-11 05:00:02.789537 | orchestrator | Saturday 11 April 2026 04:59:54 +0000 (0:00:02.167) 0:00:31.417 ******** 2026-04-11 05:00:02.789548 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:00:02.789560 | orchestrator | 2026-04-11 05:00:02.789571 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-11 05:00:02.789582 | 
orchestrator | Saturday 11 April 2026 04:59:57 +0000 (0:00:02.803) 0:00:34.221 ******** 2026-04-11 05:00:02.789594 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:00:02.789606 | orchestrator | 2026-04-11 05:00:02.789617 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-11 05:00:02.789628 | orchestrator | Saturday 11 April 2026 04:59:59 +0000 (0:00:01.676) 0:00:35.898 ******** 2026-04-11 05:00:02.789666 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 05:00:02.789696 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 05:00:02.789709 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 05:00:02.789720 | orchestrator | 2026-04-11 05:00:02.789730 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-11 05:00:02.789740 | orchestrator | 
Saturday 11 April 2026 05:00:01 +0000 (0:00:02.165) 0:00:38.063 ******** 2026-04-11 05:00:02.789751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 05:00:02.789775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 05:00:22.812641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 05:00:22.812763 | orchestrator | 2026-04-11 05:00:22.812781 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-11 05:00:22.812795 | orchestrator | Saturday 11 April 2026 05:00:03 +0000 (0:00:02.494) 0:00:40.557 ******** 2026-04-11 05:00:22.812806 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-11 05:00:22.812818 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-11 05:00:22.812829 | orchestrator 
| ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-04-11 05:00:22.812839 | orchestrator |
2026-04-11 05:00:22.812850 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-04-11 05:00:22.812860 | orchestrator | Saturday 11 April 2026 05:00:06 +0000 (0:00:02.438) 0:00:42.995 ********
2026-04-11 05:00:22.812870 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-11 05:00:22.812880 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-11 05:00:22.812890 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-04-11 05:00:22.812899 | orchestrator |
2026-04-11 05:00:22.812909 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-04-11 05:00:22.812920 | orchestrator | Saturday 11 April 2026 05:00:09 +0000 (0:00:02.638) 0:00:45.634 ********
2026-04-11 05:00:22.812931 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-11 05:00:22.812942 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-11 05:00:22.812953 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-04-11 05:00:22.812963 | orchestrator |
2026-04-11 05:00:22.812974 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-04-11 05:00:22.813007 | orchestrator | Saturday 11 April 2026 05:00:11 +0000 (0:00:02.323) 0:00:47.958 ********
2026-04-11 05:00:22.813018 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-11 05:00:22.813028 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-11 05:00:22.813037 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-04-11 05:00:22.813043 | orchestrator |
2026-04-11 05:00:22.813050 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-04-11 05:00:22.813056 | orchestrator | Saturday 11 April 2026 05:00:13 +0000 (0:00:02.500) 0:00:50.459 ********
2026-04-11 05:00:22.813062 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-11 05:00:22.813068 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-11 05:00:22.813075 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-11 05:00:22.813081 | orchestrator |
2026-04-11 05:00:22.813087 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-04-11 05:00:22.813093 | orchestrator | Saturday 11 April 2026 05:00:16 +0000 (0:00:02.264) 0:00:52.723 ********
2026-04-11 05:00:22.813099 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-11 05:00:22.813105 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-11 05:00:22.813111 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-11 05:00:22.813117 | orchestrator |
2026-04-11 05:00:22.813135 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-11 05:00:22.813142 | orchestrator | Saturday 11 April 2026 05:00:18 +0000 (0:00:02.291) 0:00:55.015 ********
2026-04-11 05:00:22.813148 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 05:00:22.813155 | orchestrator |
2026-04-11 05:00:22.813177 | orchestrator | TASK
[service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-04-11 05:00:22.813185 | orchestrator | Saturday 11 April 2026 05:00:20 +0000 (0:00:01.905) 0:00:56.920 ******** 2026-04-11 05:00:22.813194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 05:00:22.813204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 05:00:22.813219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 05:00:22.813227 | orchestrator | 2026-04-11 05:00:22.813235 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-04-11 05:00:22.813242 | orchestrator | Saturday 11 April 2026 05:00:22 +0000 (0:00:02.364) 0:00:59.284 ******** 2026-04-11 05:00:22.813260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 05:00:30.737923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 05:00:30.738094 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:00:30.738138 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:00:30.738154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 05:00:30.738167 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:00:30.738178 | orchestrator | 2026-04-11 05:00:30.738190 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-04-11 05:00:30.738202 | orchestrator | Saturday 11 April 2026 05:00:24 +0000 (0:00:01.477) 0:01:00.762 ******** 2026-04-11 05:00:30.738227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 05:00:30.738263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 05:00:30.738276 | orchestrator | skipping: 
[testbed-node-0] 2026-04-11 05:00:30.738288 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:00:30.738299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 05:00:30.738320 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:00:30.738331 | orchestrator | 2026-04-11 05:00:30.738342 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-11 05:00:30.738353 | orchestrator | Saturday 11 April 2026 05:00:26 +0000 (0:00:01.916) 0:01:02.679 ******** 2026-04-11 05:00:30.738364 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:00:30.738376 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:00:30.738387 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:00:30.738398 | orchestrator | 2026-04-11 05:00:30.738409 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-04-11 05:00:30.738444 | orchestrator | Saturday 11 April 2026 05:00:29 
+0000 (0:00:03.643) 0:01:06.322 ******** 2026-04-11 05:00:30.738457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 05:00:30.738484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 05:02:11.109589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-11 05:02:11.109759 | orchestrator | 2026-04-11 05:02:11.109786 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-04-11 05:02:11.109806 | orchestrator | Saturday 11 April 2026 05:00:31 +0000 (0:00:02.250) 0:01:08.573 ******** 2026-04-11 05:02:11.109824 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 05:02:11.109838 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 05:02:11.109851 | orchestrator | } 2026-04-11 05:02:11.109865 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 05:02:11.109878 | orchestrator |  "msg": 
"Notifying handlers" 2026-04-11 05:02:11.109892 | orchestrator | } 2026-04-11 05:02:11.109906 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 05:02:11.109920 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 05:02:11.109933 | orchestrator | } 2026-04-11 05:02:11.109947 | orchestrator | 2026-04-11 05:02:11.109960 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 05:02:11.109974 | orchestrator | Saturday 11 April 2026 05:00:33 +0000 (0:00:01.570) 0:01:10.143 ******** 2026-04-11 05:02:11.109989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 05:02:11.110085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-11 05:02:11.110121 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:02:11.110135 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:02:11.110176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}})
2026-04-11 05:02:11.110192 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:02:11.110207 | orchestrator |
2026-04-11 05:02:11.110223 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-04-11 05:02:11.110237 | orchestrator | Saturday 11 April 2026 05:00:35 +0000 (0:00:02.026) 0:01:12.170 ********
2026-04-11 05:02:11.110250 | orchestrator | changed: [testbed-node-0]
2026-04-11 05:02:11.110264 | orchestrator | changed: [testbed-node-1]
2026-04-11 05:02:11.110278 | orchestrator | changed: [testbed-node-2]
2026-04-11 05:02:11.110292 | orchestrator |
2026-04-11 05:02:11.110306 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-11 05:02:11.110320 | orchestrator |
2026-04-11 05:02:11.110358 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-11 05:02:11.110372 | orchestrator | Saturday 11 April 2026 05:00:37 +0000 (0:00:01.642) 0:01:13.812 ********
2026-04-11 05:02:11.110386 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:02:11.110401 | orchestrator |
2026-04-11 05:02:11.110416 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-11 05:02:11.110430 | orchestrator | Saturday 11 April 2026 05:00:39 +0000 (0:00:02.058) 0:01:15.870 ********
2026-04-11 05:02:11.110445 | orchestrator | changed: [testbed-node-0]
2026-04-11 05:02:11.110458 | orchestrator |
2026-04-11 05:02:11.110472 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-11 05:02:11.110485 | orchestrator | Saturday 11 April 2026 05:00:47 +0000 (0:00:08.512) 0:01:24.382 ********
2026-04-11 05:02:11.110494 | orchestrator | changed: [testbed-node-0]
2026-04-11 05:02:11.110502 | orchestrator |
2026-04-11 05:02:11.110509 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-11 05:02:11.110517 | orchestrator | Saturday 11 April 2026 05:00:56 +0000 (0:00:08.999) 0:01:33.382 ********
2026-04-11 05:02:11.110525 | orchestrator | changed: [testbed-node-0]
2026-04-11 05:02:11.110533 | orchestrator |
2026-04-11 05:02:11.110540 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-11 05:02:11.110548 | orchestrator |
2026-04-11 05:02:11.110556 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-11 05:02:11.110564 | orchestrator | Saturday 11 April 2026 05:01:05 +0000 (0:00:08.248) 0:01:41.631 ********
2026-04-11 05:02:11.110572 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:02:11.110580 | orchestrator |
2026-04-11 05:02:11.110587 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-11 05:02:11.110595 | orchestrator | Saturday 11 April 2026 05:01:06 +0000 (0:00:01.698) 0:01:43.329 ********
2026-04-11 05:02:11.110603 | orchestrator | changed: [testbed-node-1]
2026-04-11 05:02:11.110612 | orchestrator |
2026-04-11 05:02:11.110619 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-11 05:02:11.110636 | orchestrator | Saturday 11 April 2026 05:01:14 +0000 (0:00:08.077) 0:01:51.406 ********
2026-04-11 05:02:11.110644 | orchestrator | changed: [testbed-node-1]
2026-04-11 05:02:11.110651 | orchestrator |
2026-04-11 05:02:11.110659 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-11 05:02:11.110667 | orchestrator | Saturday 11 April 2026 05:01:28 +0000 (0:00:13.671) 0:02:05.078 ********
2026-04-11 05:02:11.110675 | orchestrator | changed: [testbed-node-1]
2026-04-11 05:02:11.110683 | orchestrator |
2026-04-11 05:02:11.110690 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-11 05:02:11.110698 | orchestrator |
2026-04-11 05:02:11.110713 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-11 05:02:11.110721 | orchestrator | Saturday 11 April 2026 05:01:37 +0000 (0:00:09.081) 0:02:14.159 ********
2026-04-11 05:02:11.110728 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:02:11.110736 | orchestrator |
2026-04-11 05:02:11.110744 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-11 05:02:11.110752 | orchestrator | Saturday 11 April 2026 05:01:39 +0000 (0:00:01.727) 0:02:15.887 ********
2026-04-11 05:02:11.110759 | orchestrator | changed: [testbed-node-2]
2026-04-11 05:02:11.110767 | orchestrator |
2026-04-11 05:02:11.110775 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-11 05:02:11.110783 | orchestrator | Saturday 11 April 2026 05:01:47 +0000 (0:00:08.476) 0:02:24.363 ********
2026-04-11 05:02:11.110791 | orchestrator | changed: [testbed-node-2]
2026-04-11 05:02:11.110798 | orchestrator |
2026-04-11 05:02:11.110806 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-11 05:02:11.110814 | orchestrator | Saturday 11 April 2026 05:02:01 +0000 (0:00:14.078) 0:02:38.442 ********
2026-04-11 05:02:11.110822 | orchestrator | changed: [testbed-node-2]
2026-04-11 05:02:11.110829 | orchestrator |
2026-04-11 05:02:11.110858 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-11 05:02:11.110867 | orchestrator |
2026-04-11 05:02:11.110875 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-11 05:02:11.110891 | orchestrator | Saturday 11 April 2026 05:02:11 +0000 (0:00:09.265) 0:02:47.707 ********
2026-04-11 05:02:17.027592 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11
05:02:17.027708 | orchestrator |
2026-04-11 05:02:17.027725 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-11 05:02:17.027738 | orchestrator | Saturday 11 April 2026 05:02:12 +0000 (0:00:01.528) 0:02:49.235 ********
2026-04-11 05:02:17.027749 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:02:17.027761 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:02:17.027772 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:02:17.027783 | orchestrator |
2026-04-11 05:02:17.027794 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 05:02:17.027806 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-11 05:02:17.027819 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-11 05:02:17.027830 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-11 05:02:17.027841 | orchestrator |
2026-04-11 05:02:17.027852 | orchestrator |
2026-04-11 05:02:17.027863 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 05:02:17.027874 | orchestrator | Saturday 11 April 2026 05:02:16 +0000 (0:00:04.016) 0:02:53.252 ********
2026-04-11 05:02:17.027885 | orchestrator | ===============================================================================
2026-04-11 05:02:17.027896 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 36.75s
2026-04-11 05:02:17.027906 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 26.59s
2026-04-11 05:02:17.027941 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 25.07s
2026-04-11 05:02:17.027953 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.23s
2026-04-11 05:02:17.027964 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.49s
2026-04-11 05:02:17.027975 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.02s
2026-04-11 05:02:17.027986 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.64s
2026-04-11 05:02:17.027996 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 3.08s
2026-04-11 05:02:17.028007 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 2.90s
2026-04-11 05:02:17.028018 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.89s
2026-04-11 05:02:17.028028 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.80s
2026-04-11 05:02:17.028039 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.70s
2026-04-11 05:02:17.028049 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.64s
2026-04-11 05:02:17.028060 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.50s
2026-04-11 05:02:17.028071 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.49s
2026-04-11 05:02:17.028081 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.44s
2026-04-11 05:02:17.028092 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.36s
2026-04-11 05:02:17.028102 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.32s
2026-04-11 05:02:17.028113 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.29s
2026-04-11 05:02:17.028124 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.26s
2026-04-11
05:02:17.217304 | orchestrator | + osism apply -a upgrade openvswitch
2026-04-11 05:02:18.542488 | orchestrator | 2026-04-11 05:02:18 | INFO  | Prepare task for execution of openvswitch.
2026-04-11 05:02:18.609988 | orchestrator | 2026-04-11 05:02:18 | INFO  | Task 88566b97-2289-4232-8a62-e85b462c3ba0 (openvswitch) was prepared for execution.
2026-04-11 05:02:18.610190 | orchestrator | 2026-04-11 05:02:18 | INFO  | It takes a moment until task 88566b97-2289-4232-8a62-e85b462c3ba0 (openvswitch) has been started and output is visible here.
2026-04-11 05:02:43.905440 | orchestrator |
2026-04-11 05:02:43.905543 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 05:02:43.905554 | orchestrator |
2026-04-11 05:02:43.905562 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 05:02:43.905570 | orchestrator | Saturday 11 April 2026 05:02:23 +0000 (0:00:01.535) 0:00:01.535 ********
2026-04-11 05:02:43.905577 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:02:43.905585 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:02:43.905592 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:02:43.905599 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:02:43.905606 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:02:43.905614 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:02:43.905622 | orchestrator |
2026-04-11 05:02:43.905631 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 05:02:43.905639 | orchestrator | Saturday 11 April 2026 05:02:26 +0000 (0:00:02.643) 0:00:04.179 ********
2026-04-11 05:02:43.905647 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 05:02:43.905657 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 05:02:43.905665 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 05:02:43.905673 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 05:02:43.905701 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 05:02:43.905710 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-11 05:02:43.905718 | orchestrator |
2026-04-11 05:02:43.905726 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-11 05:02:43.905733 | orchestrator |
2026-04-11 05:02:43.905741 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-11 05:02:43.905749 | orchestrator | Saturday 11 April 2026 05:02:28 +0000 (0:00:02.306) 0:00:06.485 ********
2026-04-11 05:02:43.905758 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 05:02:43.905767 | orchestrator |
2026-04-11 05:02:43.905775 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-11 05:02:43.905783 | orchestrator | Saturday 11 April 2026 05:02:31 +0000 (0:00:03.339) 0:00:09.825 ********
2026-04-11 05:02:43.905791 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-04-11 05:02:43.905800 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-04-11 05:02:43.905808 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-04-11 05:02:43.905816 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-04-11 05:02:43.905824 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-04-11 05:02:43.905831 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-04-11 05:02:43.905839 | orchestrator |
2026-04-11 05:02:43.905847 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-11 05:02:43.905855 | orchestrator | Saturday 11 April 2026 05:02:35 +0000 (0:00:03.708) 0:00:13.534 ********
2026-04-11 05:02:43.905863 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-04-11 05:02:43.905871 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-04-11 05:02:43.905879 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-04-11 05:02:43.905887 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-04-11 05:02:43.905895 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-04-11 05:02:43.905903 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-04-11 05:02:43.905911 | orchestrator |
2026-04-11 05:02:43.905918 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-11 05:02:43.905926 | orchestrator | Saturday 11 April 2026 05:02:38 +0000 (0:00:02.851) 0:00:16.386 ********
2026-04-11 05:02:43.905934 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-11 05:02:43.905943 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:02:43.905952 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-11 05:02:43.905960 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:02:43.905968 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-11 05:02:43.905976 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:02:43.905984 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-11 05:02:43.905991 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:02:43.906002 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-11 05:02:43.906013 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:02:43.906073 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-11 05:02:43.906084 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:02:43.906096 |
orchestrator | 2026-04-11 05:02:43.906108 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-11 05:02:43.906120 | orchestrator | Saturday 11 April 2026 05:02:40 +0000 (0:00:02.440) 0:00:18.827 ******** 2026-04-11 05:02:43.906131 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:02:43.906143 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:02:43.906155 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:02:43.906167 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:02:43.906177 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:02:43.906189 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:02:43.906204 | orchestrator | 2026-04-11 05:02:43.906214 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-11 05:02:43.906222 | orchestrator | Saturday 11 April 2026 05:02:42 +0000 (0:00:02.232) 0:00:21.059 ******** 2026-04-11 05:02:43.906260 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:43.906277 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:43.906289 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:43.906349 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:43.906358 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:43.906370 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:43.906391 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:47.480943 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:47.481054 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:47.481071 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:47.481084 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:47.481136 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:47.481150 | orchestrator | 2026-04-11 05:02:47.481163 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-11 05:02:47.481176 | orchestrator | Saturday 11 April 2026 05:02:45 +0000 (0:00:02.760) 0:00:23.820 ******** 2026-04-11 05:02:47.481205 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:47.481219 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:47.481230 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:47.481242 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:47.481268 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:47.481279 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:47.481346 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:53.527282 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:53.527423 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:53.527440 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:53.527491 | orchestrator | ok: 
[testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:53.527505 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:53.527517 | orchestrator | 2026-04-11 05:02:53.527530 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-11 05:02:53.527543 | orchestrator | Saturday 11 April 2026 05:02:49 +0000 (0:00:03.835) 0:00:27.655 ******** 2026-04-11 05:02:53.527554 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:02:53.527567 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:02:53.527578 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:02:53.527588 | orchestrator | skipping: 
[testbed-node-3] 2026-04-11 05:02:53.527599 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:02:53.527610 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:02:53.527621 | orchestrator | 2026-04-11 05:02:53.527632 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-04-11 05:02:53.527661 | orchestrator | Saturday 11 April 2026 05:02:51 +0000 (0:00:02.359) 0:00:30.015 ******** 2026-04-11 05:02:53.527673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:53.527687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 
05:02:53.527707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:53.527724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:53.527736 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:53.527757 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-11 05:02:57.720899 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:57.721030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:57.721048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:57.721075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:57.721088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:57.721119 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-11 05:02:57.721132 | orchestrator | 2026-04-11 05:02:57.721146 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-04-11 05:02:57.721159 | orchestrator | Saturday 11 April 2026 05:02:55 +0000 (0:00:03.522) 0:00:33.537 ******** 2026-04-11 05:02:57.721178 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 05:02:57.721191 | orchestrator |  
"msg": "Notifying handlers" 2026-04-11 05:02:57.721202 | orchestrator | } 2026-04-11 05:02:57.721213 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 05:02:57.721224 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 05:02:57.721234 | orchestrator | } 2026-04-11 05:02:57.721245 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 05:02:57.721256 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 05:02:57.721267 | orchestrator | } 2026-04-11 05:02:57.721277 | orchestrator | changed: [testbed-node-3] => { 2026-04-11 05:02:57.721352 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 05:02:57.721365 | orchestrator | } 2026-04-11 05:02:57.721376 | orchestrator | changed: [testbed-node-4] => { 2026-04-11 05:02:57.721387 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 05:02:57.721398 | orchestrator | } 2026-04-11 05:02:57.721409 | orchestrator | changed: [testbed-node-5] => { 2026-04-11 05:02:57.721420 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 05:02:57.721434 | orchestrator | } 2026-04-11 05:02:57.721447 | orchestrator | 2026-04-11 05:02:57.721460 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 05:02:57.721474 | orchestrator | Saturday 11 April 2026 05:02:57 +0000 (0:00:01.830) 0:00:35.368 ******** 2026-04-11 05:02:57.721488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-11 05:02:57.721510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-11 05:02:57.721525 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:02:57.721540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-11 05:02:57.721554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-11 05:02:57.721584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-11 05:03:31.896674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-11 
05:03:31.896779 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:03:31.896790 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:03:31.896799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-11 05:03:31.896823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-11 05:03:31.896830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-11 05:03:31.896855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-11 05:03:31.896861 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:03:31.896867 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:03:31.896890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-11 05:03:31.896897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-11 05:03:31.896903 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:03:31.896909 | orchestrator | 2026-04-11 05:03:31.896917 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 05:03:31.896924 | orchestrator | Saturday 11 April 2026 05:02:59 +0000 (0:00:02.629) 0:00:37.998 ******** 2026-04-11 05:03:31.896930 | orchestrator | 2026-04-11 05:03:31.896936 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 05:03:31.896942 | orchestrator | Saturday 11 April 2026 05:03:00 +0000 (0:00:00.752) 0:00:38.751 ******** 2026-04-11 05:03:31.896947 | orchestrator | 2026-04-11 05:03:31.896953 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 05:03:31.896964 | orchestrator | Saturday 11 April 2026 05:03:01 +0000 (0:00:00.523) 0:00:39.274 ******** 2026-04-11 05:03:31.896970 | orchestrator | 2026-04-11 05:03:31.896976 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 05:03:31.896982 | 
orchestrator | Saturday 11 April 2026 05:03:01 +0000 (0:00:00.543) 0:00:39.817 ******** 2026-04-11 05:03:31.896987 | orchestrator | 2026-04-11 05:03:31.896993 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 05:03:31.897000 | orchestrator | Saturday 11 April 2026 05:03:02 +0000 (0:00:00.533) 0:00:40.351 ******** 2026-04-11 05:03:31.897006 | orchestrator | 2026-04-11 05:03:31.897012 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-11 05:03:31.897030 | orchestrator | Saturday 11 April 2026 05:03:02 +0000 (0:00:00.528) 0:00:40.879 ******** 2026-04-11 05:03:31.897036 | orchestrator | 2026-04-11 05:03:31.897042 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-11 05:03:31.897047 | orchestrator | Saturday 11 April 2026 05:03:03 +0000 (0:00:00.841) 0:00:41.721 ******** 2026-04-11 05:03:31.897053 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:03:31.897059 | orchestrator | changed: [testbed-node-3] 2026-04-11 05:03:31.897065 | orchestrator | changed: [testbed-node-5] 2026-04-11 05:03:31.897070 | orchestrator | changed: [testbed-node-4] 2026-04-11 05:03:31.897076 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:03:31.897081 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:03:31.897087 | orchestrator | 2026-04-11 05:03:31.897093 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-11 05:03:31.897101 | orchestrator | Saturday 11 April 2026 05:03:15 +0000 (0:00:11.986) 0:00:53.708 ******** 2026-04-11 05:03:31.897107 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:03:31.897115 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:03:31.897121 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:03:31.897127 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:03:31.897133 | orchestrator | ok: [testbed-node-4] 2026-04-11 
05:03:31.897139 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:03:31.897145 | orchestrator | 2026-04-11 05:03:31.897151 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-11 05:03:31.897157 | orchestrator | Saturday 11 April 2026 05:03:17 +0000 (0:00:02.337) 0:00:56.045 ******** 2026-04-11 05:03:31.897163 | orchestrator | changed: [testbed-node-3] 2026-04-11 05:03:31.897169 | orchestrator | changed: [testbed-node-5] 2026-04-11 05:03:31.897175 | orchestrator | changed: [testbed-node-4] 2026-04-11 05:03:31.897182 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:03:31.897188 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:03:31.897194 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:03:31.897200 | orchestrator | 2026-04-11 05:03:31.897207 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-11 05:03:31.897214 | orchestrator | Saturday 11 April 2026 05:03:28 +0000 (0:00:11.076) 0:01:07.122 ******** 2026-04-11 05:03:31.897221 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-11 05:03:31.897229 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-11 05:03:31.897237 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-11 05:03:31.897244 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-11 05:03:31.897251 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-11 05:03:31.897297 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-11 05:03:44.975156 | orchestrator | ok: [testbed-node-0] => 
(item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-11 05:03:44.975325 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-11 05:03:44.975352 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-11 05:03:44.975373 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-11 05:03:44.975385 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-11 05:03:44.975396 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-11 05:03:44.975407 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-11 05:03:44.975440 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-11 05:03:44.975454 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-11 05:03:44.975474 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-11 05:03:44.975490 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-11 05:03:44.975508 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-11 05:03:44.975526 | orchestrator | 2026-04-11 05:03:44.975546 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-04-11 05:03:44.975565 | orchestrator | Saturday 11 April 2026 05:03:37 +0000 
(0:00:08.032) 0:01:15.154 ******** 2026-04-11 05:03:44.975601 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-11 05:03:44.975620 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:03:44.975641 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-11 05:03:44.975659 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:03:44.975678 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-11 05:03:44.975692 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:03:44.975705 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-04-11 05:03:44.975719 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-04-11 05:03:44.975732 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-04-11 05:03:44.975744 | orchestrator | 2026-04-11 05:03:44.975757 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-11 05:03:44.975770 | orchestrator | Saturday 11 April 2026 05:03:40 +0000 (0:00:03.281) 0:01:18.436 ******** 2026-04-11 05:03:44.975783 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-04-11 05:03:44.975795 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:03:44.975809 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-04-11 05:03:44.975821 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:03:44.975834 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-04-11 05:03:44.975846 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:03:44.975858 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-04-11 05:03:44.975871 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-04-11 05:03:44.975883 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-04-11 05:03:44.975896 | orchestrator | 2026-04-11 05:03:44.975909 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-11 05:03:44.975923 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-11 05:03:44.975937 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-11 05:03:44.975950 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-11 05:03:44.975966 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 05:03:44.975985 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 05:03:44.976004 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 05:03:44.976022 | orchestrator | 2026-04-11 05:03:44.976040 | orchestrator | 2026-04-11 05:03:44.976070 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 05:03:44.976087 | orchestrator | Saturday 11 April 2026 05:03:44 +0000 (0:00:04.246) 0:01:22.682 ******** 2026-04-11 05:03:44.976105 | orchestrator | =============================================================================== 2026-04-11 05:03:44.976124 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.99s 2026-04-11 05:03:44.976168 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.08s 2026-04-11 05:03:44.976181 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.03s 2026-04-11 05:03:44.976192 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.25s 2026-04-11 05:03:44.976203 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.84s 2026-04-11 05:03:44.976214 | orchestrator | openvswitch : 
Flush Handlers -------------------------------------------- 3.72s 2026-04-11 05:03:44.976224 | orchestrator | module-load : Load modules ---------------------------------------------- 3.71s 2026-04-11 05:03:44.976235 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.52s 2026-04-11 05:03:44.976246 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.34s 2026-04-11 05:03:44.976288 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.28s 2026-04-11 05:03:44.976299 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.85s 2026-04-11 05:03:44.976311 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.76s 2026-04-11 05:03:44.976322 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.64s 2026-04-11 05:03:44.976333 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.63s 2026-04-11 05:03:44.976343 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.44s 2026-04-11 05:03:44.976354 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.36s 2026-04-11 05:03:44.976365 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.34s 2026-04-11 05:03:44.976376 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.31s 2026-04-11 05:03:44.976387 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.23s 2026-04-11 05:03:44.976398 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.83s 2026-04-11 05:03:45.160707 | orchestrator | + osism apply -a upgrade ovn 2026-04-11 05:03:46.473720 | orchestrator | 2026-04-11 05:03:46 | INFO  | Prepare task for execution of ovn. 
2026-04-11 05:03:46.551720 | orchestrator | 2026-04-11 05:03:46 | INFO  | Task a4bab27d-1999-4da2-86c1-26b230c501e7 (ovn) was prepared for execution.
2026-04-11 05:03:46.551816 | orchestrator | 2026-04-11 05:03:46 | INFO  | It takes a moment until task a4bab27d-1999-4da2-86c1-26b230c501e7 (ovn) has been started and output is visible here.
2026-04-11 05:04:08.600665 | orchestrator |
2026-04-11 05:04:08.600763 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 05:04:08.600773 | orchestrator |
2026-04-11 05:04:08.600781 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 05:04:08.600789 | orchestrator | Saturday 11 April 2026 05:03:51 +0000 (0:00:01.491) 0:00:01.491 ********
2026-04-11 05:04:08.600797 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:04:08.600805 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:04:08.600812 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:04:08.600820 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:04:08.600827 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:04:08.600834 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:04:08.600841 | orchestrator |
2026-04-11 05:04:08.600848 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 05:04:08.600855 | orchestrator | Saturday 11 April 2026 05:03:54 +0000 (0:00:03.488) 0:00:04.979 ********
2026-04-11 05:04:08.600862 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-11 05:04:08.600888 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-11 05:04:08.600895 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-11 05:04:08.600902 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-11 05:04:08.600909 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-04-11 05:04:08.600916 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-11 05:04:08.600923 | orchestrator |
2026-04-11 05:04:08.600930 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-11 05:04:08.600936 | orchestrator |
2026-04-11 05:04:08.600943 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-11 05:04:08.600950 | orchestrator | Saturday 11 April 2026 05:03:57 +0000 (0:00:02.827) 0:00:07.808 ********
2026-04-11 05:04:08.600958 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 05:04:08.600967 | orchestrator |
2026-04-11 05:04:08.600974 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-04-11 05:04:08.600981 | orchestrator | Saturday 11 April 2026 05:04:02 +0000 (0:00:04.842) 0:00:12.650 ********
2026-04-11 05:04:08.600990 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.600999 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.601006 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.601013 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.601020 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.601052 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.601065 | orchestrator |
2026-04-11 05:04:08.601072 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-04-11 05:04:08.601079 | orchestrator | Saturday 11 April 2026 05:04:05 +0000 (0:00:02.634) 0:00:15.285 ********
2026-04-11 05:04:08.601086 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.601093 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.601100 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.601108 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.601115 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.601122 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.601129 | orchestrator |
2026-04-11 05:04:08.601136 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-04-11 05:04:08.601143 | orchestrator | Saturday 11 April 2026 05:04:07 +0000 (0:00:02.147) 0:00:18.001 ********
2026-04-11 05:04:08.601150 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.601160 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:08.601175 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783344 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783460 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783478 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783491 | orchestrator |
2026-04-11 05:04:17.783504 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-04-11 05:04:17.783517 | orchestrator | Saturday 11 April 2026 05:04:10 +0000 (0:00:02.147) 0:00:20.148 ********
2026-04-11 05:04:17.783529 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783542 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783554 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783565 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783635 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783665 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783695 | orchestrator |
2026-04-11 05:04:17.783707 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-04-11 05:04:17.783718 | orchestrator | Saturday 11 April 2026 05:04:13 +0000 (0:00:03.159) 0:00:23.308 ********
2026-04-11 05:04:17.783731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783746 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783791 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783812 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:17.783844 | orchestrator |
2026-04-11 05:04:17.783861 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-04-11 05:04:17.783875 | orchestrator | Saturday 11 April 2026 05:04:15 +0000 (0:00:02.673) 0:00:25.982 ********
2026-04-11 05:04:17.783889 | orchestrator | changed: [testbed-node-0] => {
2026-04-11 05:04:17.783903 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 05:04:17.783916 | orchestrator | }
2026-04-11 05:04:17.783928 | orchestrator | changed: [testbed-node-1] => {
2026-04-11 05:04:17.783941 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 05:04:17.783954 | orchestrator | }
2026-04-11 05:04:17.783967 | orchestrator | changed: [testbed-node-2] => {
2026-04-11 05:04:17.783979 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 05:04:17.783992 | orchestrator | }
2026-04-11 05:04:17.784005 | orchestrator | changed: [testbed-node-3] => {
2026-04-11 05:04:17.784018 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 05:04:17.784029 | orchestrator | }
2026-04-11 05:04:17.784040 | orchestrator | changed: [testbed-node-4] => {
2026-04-11 05:04:17.784051 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 05:04:17.784062 | orchestrator | }
2026-04-11 05:04:17.784073 | orchestrator | changed: [testbed-node-5] => {
2026-04-11 05:04:17.784083 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 05:04:17.784094 | orchestrator | }
2026-04-11 05:04:17.784105 | orchestrator |
2026-04-11 05:04:17.784116 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-11 05:04:17.784127 | orchestrator | Saturday 11 April 2026 05:04:17 +0000 (0:00:01.803) 0:00:27.785 ********
2026-04-11 05:04:17.784149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:40.977437 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:04:40.977561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:40.977582 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:04:40.977594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:40.977606 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:04:40.977657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:40.977670 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:04:40.977681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:40.977713 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:04:40.977725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 05:04:40.977736 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:04:40.977747 | orchestrator |
2026-04-11 05:04:40.977758 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-04-11 05:04:40.977771 | orchestrator | Saturday 11 April 2026 05:04:20 +0000 (0:00:02.476) 0:00:30.262 ********
2026-04-11 05:04:40.977783 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:04:40.977794 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:04:40.977805 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:04:40.977815 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:04:40.977826 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:04:40.977836 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:04:40.977847 | orchestrator |
2026-04-11 05:04:40.977858 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-04-11 05:04:40.977869 | orchestrator | Saturday 11 April 2026 05:04:23 +0000 (0:00:03.698) 0:00:33.960 ********
2026-04-11 05:04:40.977880 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-04-11 05:04:40.977897 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-04-11 05:04:40.977907 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-04-11 05:04:40.977923 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-04-11 05:04:40.977942 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-04-11 05:04:40.977958 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-04-11 05:04:40.977969 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-11 05:04:40.977980 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-11 05:04:40.977990 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-11 05:04:40.978001 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-11 05:04:40.978012 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-11 05:04:40.978121 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-11 05:04:40.978142 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-04-11 05:04:40.978155 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-04-11 05:04:40.978166 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-04-11 05:04:40.978177 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-04-11 05:04:40.978188 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-04-11 05:04:40.978209 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-04-11 05:04:40.978257 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-11 05:04:40.978277 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-11 05:04:40.978295 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-11 05:04:40.978314 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-11 05:04:40.978327 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-11 05:04:40.978338 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-11 05:04:40.978348 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-11 05:04:40.978359 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-11 05:04:40.978370 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-11 05:04:40.978380 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-11 05:04:40.978391 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-11 05:04:40.978409 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-11 05:04:40.978427 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-11 05:04:40.978443 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-11 05:04:40.978461 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-11 05:04:40.978478 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-11 05:04:40.978497 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-11 05:04:40.978515 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-11 05:04:40.978533 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-11 05:04:40.978551 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-11 05:04:40.978569 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-11 05:04:40.978587 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-11 05:04:40.978614 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-11 05:04:40.978634 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-11 05:04:40.978647 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-04-11 05:04:40.978660 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-04-11 05:04:40.978671 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-04-11 05:04:40.978682 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-04-11 05:04:40.978703 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-04-11 05:04:40.978726 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-04-11 05:07:33.508068 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-11 05:07:33.508259 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-11 05:07:33.508280 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-11 05:07:33.508292 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-11 05:07:33.508304 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-11 05:07:33.508315 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-11 05:07:33.508326 | orchestrator |
2026-04-11 05:07:33.508338 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-11 05:07:33.508349 | orchestrator | Saturday 11 April 2026 05:04:44 +0000 (0:00:20.382) 0:00:54.343 ********
2026-04-11 05:07:33.508360 | orchestrator |
2026-04-11 05:07:33.508371 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-11 05:07:33.508382 | orchestrator | Saturday 11 April 2026 05:04:44 +0000 (0:00:00.428) 0:00:54.771 ********
2026-04-11 05:07:33.508393 | orchestrator |
2026-04-11 05:07:33.508404 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-11 05:07:33.508415 | orchestrator | Saturday 11 April 2026 05:04:45 +0000 (0:00:00.452) 0:00:55.224 ********
2026-04-11 05:07:33.508426 | orchestrator |
2026-04-11 05:07:33.508437 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-11 05:07:33.508448 | orchestrator | Saturday 11 April 2026 05:04:45 +0000 (0:00:00.607) 0:00:55.831 ********
2026-04-11 05:07:33.508458 | orchestrator |
2026-04-11 05:07:33.508469 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-11 05:07:33.508480 | orchestrator | Saturday 11 April 2026 05:04:46 +0000 (0:00:00.450) 0:00:56.282 ********
2026-04-11 05:07:33.508491 | orchestrator |
2026-04-11 05:07:33.508502 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-11 05:07:33.508513 | orchestrator | Saturday 11 April 2026 05:04:46 +0000 (0:00:00.438) 0:00:56.721 ********
2026-04-11 05:07:33.508524 | orchestrator |
2026-04-11 05:07:33.508535 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-04-11 05:07:33.508546 | orchestrator | Saturday 11 April 2026 05:04:47 +0000 (0:00:00.806) 0:00:57.527 ********
2026-04-11 05:07:33.508557 | orchestrator |
2026-04-11 05:07:33.508568 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] ***
2026-04-11 05:07:33.508580 | orchestrator | changed: [testbed-node-3]
2026-04-11 05:07:33.508593 | orchestrator | changed: [testbed-node-5]
2026-04-11 05:07:33.508606 | orchestrator | changed: [testbed-node-4]
2026-04-11 05:07:33.508619 | orchestrator | changed: [testbed-node-0]
2026-04-11 05:07:33.508632 | orchestrator | changed: [testbed-node-1]
2026-04-11 05:07:33.508645 | orchestrator | changed: [testbed-node-2]
2026-04-11 05:07:33.508659 | orchestrator |
2026-04-11 05:07:33.508671 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-04-11 05:07:33.508685 | orchestrator |
2026-04-11 05:07:33.508699 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-11 05:07:33.508712 | orchestrator | Saturday 11 April 2026 05:06:59 +0000 (0:02:11.884) 0:03:09.412 ********
2026-04-11 05:07:33.508725 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 05:07:33.508761 | orchestrator |
2026-04-11 05:07:33.508774 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-11 05:07:33.508787 | orchestrator | Saturday 11 April 2026 05:07:01 +0000 (0:00:01.858) 0:03:11.271 ********
2026-04-11 05:07:33.508800 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 05:07:33.508812 | orchestrator |
2026-04-11 05:07:33.508825 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-04-11 05:07:33.508836 | orchestrator | Saturday 11 April 2026 05:07:03 +0000 (0:00:01.883) 0:03:13.155 ********
2026-04-11 05:07:33.508850 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:07:33.508862 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:07:33.508876 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:07:33.508888 | orchestrator |
2026-04-11 05:07:33.508915 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-04-11 05:07:33.508926 | orchestrator | Saturday 11 April 2026 05:07:04 +0000 (0:00:01.858) 0:03:15.013 ********
2026-04-11 05:07:33.508937 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:07:33.508948 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:07:33.508958 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:07:33.508969 | orchestrator |
2026-04-11 05:07:33.508980 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-04-11 05:07:33.508991 | orchestrator | Saturday 11 April 2026 05:07:06 +0000 (0:00:01.407) 0:03:16.421 ********
2026-04-11 05:07:33.509002 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:07:33.509013 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:07:33.509024 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:07:33.509034 | orchestrator |
2026-04-11 05:07:33.509045 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-04-11 05:07:33.509056 | orchestrator | Saturday 11 April 2026 05:07:07 +0000 (0:00:01.371) 0:03:17.793 ********
2026-04-11 05:07:33.509066 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:07:33.509077 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:07:33.509087 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:07:33.509098 | orchestrator |
2026-04-11 05:07:33.509109 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-04-11 05:07:33.509120 | orchestrator | Saturday 11 April 2026 05:07:09 +0000 (0:00:01.405) 0:03:19.198 ********
2026-04-11 05:07:33.509131 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:07:33.509188 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:07:33.509200 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:07:33.509211 | orchestrator |
2026-04-11 05:07:33.509222 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-04-11 05:07:33.509233 | orchestrator | Saturday 11 April 2026 05:07:10 +0000 (0:00:01.360) 0:03:20.559 ********
2026-04-11 05:07:33.509244 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:07:33.509255 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:07:33.509265 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:07:33.509276 | orchestrator |
2026-04-11 05:07:33.509287 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-04-11 05:07:33.509297 | orchestrator | Saturday 11 April 2026 05:07:12 +0000 (0:00:01.582) 0:03:22.141 ********
2026-04-11 05:07:33.509308 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:07:33.509319 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:07:33.509329 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:07:33.509340 | orchestrator |
2026-04-11 05:07:33.509351 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-04-11 05:07:33.509361 | orchestrator | Saturday 11 April 2026 05:07:13 +0000 (0:00:01.855) 0:03:23.997 ********
2026-04-11 05:07:33.509372 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:07:33.509383 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:07:33.509393 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:07:33.509404 | orchestrator | 2026-04-11 05:07:33.509415 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-11 05:07:33.509425 | orchestrator | Saturday 11 April 2026 05:07:15 +0000 (0:00:01.485) 0:03:25.482 ******** 2026-04-11 05:07:33.509446 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:07:33.509456 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:07:33.509467 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:07:33.509478 | orchestrator | 2026-04-11 05:07:33.509488 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-11 05:07:33.509499 | orchestrator | Saturday 11 April 2026 05:07:17 +0000 (0:00:01.869) 0:03:27.352 ******** 2026-04-11 05:07:33.509510 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:07:33.509520 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:07:33.509531 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:07:33.509541 | orchestrator | 2026-04-11 05:07:33.509552 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-11 05:07:33.509563 | orchestrator | Saturday 11 April 2026 05:07:18 +0000 (0:00:01.585) 0:03:28.937 ******** 2026-04-11 05:07:33.509574 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:07:33.509584 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:07:33.509595 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:07:33.509606 | orchestrator | 2026-04-11 05:07:33.509617 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-11 05:07:33.509628 | orchestrator | Saturday 11 April 2026 05:07:20 +0000 (0:00:01.414) 0:03:30.352 ******** 2026-04-11 05:07:33.509639 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:07:33.509649 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:07:33.509660 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:07:33.509670 | 
orchestrator | 2026-04-11 05:07:33.509681 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-11 05:07:33.509692 | orchestrator | Saturday 11 April 2026 05:07:21 +0000 (0:00:01.369) 0:03:31.721 ******** 2026-04-11 05:07:33.509702 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:07:33.509713 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:07:33.509724 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:07:33.509734 | orchestrator | 2026-04-11 05:07:33.509745 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-11 05:07:33.509756 | orchestrator | Saturday 11 April 2026 05:07:23 +0000 (0:00:02.019) 0:03:33.741 ******** 2026-04-11 05:07:33.509766 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:07:33.509777 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:07:33.509788 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:07:33.509798 | orchestrator | 2026-04-11 05:07:33.509809 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-11 05:07:33.509820 | orchestrator | Saturday 11 April 2026 05:07:25 +0000 (0:00:01.484) 0:03:35.226 ******** 2026-04-11 05:07:33.509830 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:07:33.509841 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:07:33.509851 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:07:33.509862 | orchestrator | 2026-04-11 05:07:33.509873 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-11 05:07:33.509884 | orchestrator | Saturday 11 April 2026 05:07:26 +0000 (0:00:01.830) 0:03:37.056 ******** 2026-04-11 05:07:33.509894 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:07:33.509905 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:07:33.509915 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:07:33.509926 | orchestrator | 2026-04-11 05:07:33.509937 | orchestrator | TASK [ovn-db : 
Fail on existing OVN SB cluster with no leader] ***************** 2026-04-11 05:07:33.509947 | orchestrator | Saturday 11 April 2026 05:07:28 +0000 (0:00:01.403) 0:03:38.460 ******** 2026-04-11 05:07:33.509964 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:07:33.509975 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:07:33.509986 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:07:33.509997 | orchestrator | 2026-04-11 05:07:33.510007 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-11 05:07:33.510080 | orchestrator | Saturday 11 April 2026 05:07:29 +0000 (0:00:01.561) 0:03:40.021 ******** 2026-04-11 05:07:33.510092 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:07:33.510103 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:07:33.510122 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:07:33.510132 | orchestrator | 2026-04-11 05:07:33.510174 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-11 05:07:33.510185 | orchestrator | Saturday 11 April 2026 05:07:31 +0000 (0:00:01.813) 0:03:41.835 ******** 2026-04-11 05:07:33.510207 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:39.743787 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:39.743902 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:39.743920 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:39.743935 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:39.743946 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:39.743988 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:39.744043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:07:39.744088 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:39.744108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:07:39.744125 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:39.744169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:07:39.744187 | orchestrator | 
2026-04-11 05:07:39.744306 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-11 05:07:39.744334 | orchestrator | Saturday 11 April 2026 05:07:35 +0000 (0:00:03.902) 0:03:45.737 ******** 2026-04-11 05:07:39.744349 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:39.744363 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:39.744395 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:39.744409 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:39.744434 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:54.789759 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:54.789878 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:54.789896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:07:54.789909 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:54.789958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:07:54.789971 | orchestrator | 
ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:54.789982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:07:54.789994 | orchestrator | 2026-04-11 05:07:54.790008 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-04-11 05:07:54.790078 | orchestrator | Saturday 11 April 2026 05:07:41 +0000 (0:00:06.244) 0:03:51.982 ******** 2026-04-11 05:07:54.790091 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-04-11 05:07:54.790103 | orchestrator | 2026-04-11 05:07:54.790114 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-04-11 05:07:54.790125 | orchestrator | Saturday 11 April 2026 05:07:43 +0000 (0:00:01.906) 0:03:53.889 ******** 2026-04-11 05:07:54.790174 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:07:54.790193 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:07:54.790221 | orchestrator | changed: [testbed-node-2] 2026-04-11 
05:07:54.790232 | orchestrator | 2026-04-11 05:07:54.790244 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-04-11 05:07:54.790255 | orchestrator | Saturday 11 April 2026 05:07:45 +0000 (0:00:01.834) 0:03:55.724 ******** 2026-04-11 05:07:54.790266 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:07:54.790278 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:07:54.790289 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:07:54.790300 | orchestrator | 2026-04-11 05:07:54.790313 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-04-11 05:07:54.790326 | orchestrator | Saturday 11 April 2026 05:07:48 +0000 (0:00:02.832) 0:03:58.557 ******** 2026-04-11 05:07:54.790338 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:07:54.790352 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:07:54.790365 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:07:54.790376 | orchestrator | 2026-04-11 05:07:54.790387 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-04-11 05:07:54.790398 | orchestrator | Saturday 11 April 2026 05:07:51 +0000 (0:00:02.591) 0:04:01.149 ******** 2026-04-11 05:07:54.790410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:54.790434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:54.790446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:54.790476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:54.790488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:54.790500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:07:54.790520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:08:00.273705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:08:00.273838 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:08:00.273854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:08:00.273878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:08:00.273890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': 
['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:08:00.273901 | orchestrator | 2026-04-11 05:08:00.273914 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-11 05:08:00.273927 | orchestrator | Saturday 11 April 2026 05:07:56 +0000 (0:00:05.294) 0:04:06.444 ******** 2026-04-11 05:08:00.273939 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 05:08:00.273951 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 05:08:00.273962 | orchestrator | } 2026-04-11 05:08:00.273974 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 05:08:00.273985 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 05:08:00.273996 | orchestrator | } 2026-04-11 05:08:00.274006 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 05:08:00.274059 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 05:08:00.274073 | orchestrator | } 2026-04-11 05:08:00.274084 | orchestrator | 2026-04-11 05:08:00.274096 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 05:08:00.274107 | orchestrator | Saturday 11 April 2026 05:07:57 +0000 (0:00:01.475) 0:04:07.919 ******** 2026-04-11 05:08:00.274118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:08:00.274191 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:08:00.274216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:08:00.274228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:08:00.274239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:08:00.274260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:08:00.274274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:08:00.274287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-11 05:08:00.274300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 05:08:00.274329 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 05:09:55.855963 | orchestrator | 2026-04-11 05:09:55.856168 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-04-11 05:09:55.856188 | orchestrator | Saturday 11 April 2026 05:08:01 +0000 (0:00:03.631) 0:04:11.551 ******** 2026-04-11 05:09:55.856201 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-04-11 05:09:55.856215 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-04-11 05:09:55.856226 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-04-11 05:09:55.856238 | orchestrator | 2026-04-11 05:09:55.856249 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-11 05:09:55.856262 | orchestrator | Saturday 11 April 2026 05:08:25 +0000 (0:00:23.581) 
0:04:35.133 ******** 2026-04-11 05:09:55.856273 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 05:09:55.856284 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 05:09:55.856296 | orchestrator | } 2026-04-11 05:09:55.856307 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 05:09:55.856318 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 05:09:55.856329 | orchestrator | } 2026-04-11 05:09:55.856340 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 05:09:55.856351 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 05:09:55.856361 | orchestrator | } 2026-04-11 05:09:55.856372 | orchestrator | 2026-04-11 05:09:55.856384 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-11 05:09:55.856395 | orchestrator | Saturday 11 April 2026 05:08:26 +0000 (0:00:01.495) 0:04:36.629 ******** 2026-04-11 05:09:55.856406 | orchestrator | 2026-04-11 05:09:55.856417 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-11 05:09:55.856428 | orchestrator | Saturday 11 April 2026 05:08:26 +0000 (0:00:00.454) 0:04:37.083 ******** 2026-04-11 05:09:55.856439 | orchestrator | 2026-04-11 05:09:55.856451 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-11 05:09:55.856464 | orchestrator | Saturday 11 April 2026 05:08:27 +0000 (0:00:00.454) 0:04:37.538 ******** 2026-04-11 05:09:55.856478 | orchestrator | 2026-04-11 05:09:55.856490 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-11 05:09:55.856503 | orchestrator | Saturday 11 April 2026 05:08:28 +0000 (0:00:00.813) 0:04:38.351 ******** 2026-04-11 05:09:55.856516 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:09:55.856529 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:09:55.856564 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:09:55.856578 | orchestrator 
| 2026-04-11 05:09:55.856591 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-11 05:09:55.856604 | orchestrator | Saturday 11 April 2026 05:08:45 +0000 (0:00:16.802) 0:04:55.154 ******** 2026-04-11 05:09:55.856617 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:09:55.856630 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:09:55.856643 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:09:55.856655 | orchestrator | 2026-04-11 05:09:55.856669 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-04-11 05:09:55.856681 | orchestrator | Saturday 11 April 2026 05:09:01 +0000 (0:00:16.842) 0:05:11.996 ******** 2026-04-11 05:09:55.856694 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-04-11 05:09:55.856735 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-04-11 05:09:55.856749 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-04-11 05:09:55.856761 | orchestrator | 2026-04-11 05:09:55.856774 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-11 05:09:55.856786 | orchestrator | Saturday 11 April 2026 05:09:17 +0000 (0:00:15.837) 0:05:27.834 ******** 2026-04-11 05:09:55.856800 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:09:55.856813 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:09:55.856827 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:09:55.856840 | orchestrator | 2026-04-11 05:09:55.856852 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-11 05:09:55.856863 | orchestrator | Saturday 11 April 2026 05:09:34 +0000 (0:00:17.094) 0:05:44.929 ******** 2026-04-11 05:09:55.856874 | orchestrator | Pausing for 5 seconds 2026-04-11 05:09:55.856885 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:09:55.856897 | orchestrator | 2026-04-11 05:09:55.856907 | orchestrator | TASK [ovn-db 
: Get OVN_Northbound cluster leader] ****************************** 2026-04-11 05:09:55.856918 | orchestrator | Saturday 11 April 2026 05:09:41 +0000 (0:00:06.210) 0:05:51.139 ******** 2026-04-11 05:09:55.856929 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:09:55.856940 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:09:55.856951 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:09:55.856961 | orchestrator | 2026-04-11 05:09:55.856972 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-11 05:09:55.856983 | orchestrator | Saturday 11 April 2026 05:09:42 +0000 (0:00:01.829) 0:05:52.969 ******** 2026-04-11 05:09:55.856995 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:09:55.857005 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:09:55.857016 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:09:55.857027 | orchestrator | 2026-04-11 05:09:55.857038 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-11 05:09:55.857049 | orchestrator | Saturday 11 April 2026 05:09:45 +0000 (0:00:02.166) 0:05:55.136 ******** 2026-04-11 05:09:55.857060 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:09:55.857071 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:09:55.857082 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:09:55.857092 | orchestrator | 2026-04-11 05:09:55.857123 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-11 05:09:55.857135 | orchestrator | Saturday 11 April 2026 05:09:46 +0000 (0:00:01.824) 0:05:56.961 ******** 2026-04-11 05:09:55.857146 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:09:55.857156 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:09:55.857167 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:09:55.857178 | orchestrator | 2026-04-11 05:09:55.857189 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] 
********************************************* 2026-04-11 05:09:55.857200 | orchestrator | Saturday 11 April 2026 05:09:48 +0000 (0:00:01.871) 0:05:58.832 ******** 2026-04-11 05:09:55.857211 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:09:55.857222 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:09:55.857232 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:09:55.857243 | orchestrator | 2026-04-11 05:09:55.857254 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-11 05:09:55.857285 | orchestrator | Saturday 11 April 2026 05:09:50 +0000 (0:00:01.893) 0:06:00.726 ******** 2026-04-11 05:09:55.857297 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:09:55.857307 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:09:55.857318 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:09:55.857329 | orchestrator | 2026-04-11 05:09:55.857339 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-04-11 05:09:55.857350 | orchestrator | Saturday 11 April 2026 05:09:52 +0000 (0:00:02.112) 0:06:02.839 ******** 2026-04-11 05:09:55.857361 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-04-11 05:09:55.857372 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-04-11 05:09:55.857383 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-04-11 05:09:55.857402 | orchestrator | 2026-04-11 05:09:55.857413 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 05:09:55.857425 | orchestrator | testbed-node-0 : ok=48  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-11 05:09:55.857438 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-11 05:09:55.857449 | orchestrator | testbed-node-2 : ok=49  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 05:09:55.857460 | orchestrator | testbed-node-3 : ok=12  
changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 05:09:55.857471 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 05:09:55.857482 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 05:09:55.857493 | orchestrator | 2026-04-11 05:09:55.857504 | orchestrator | 2026-04-11 05:09:55.857520 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 05:09:55.857531 | orchestrator | Saturday 11 April 2026 05:09:55 +0000 (0:00:02.637) 0:06:05.476 ******** 2026-04-11 05:09:55.857542 | orchestrator | =============================================================================== 2026-04-11 05:09:55.857553 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.89s 2026-04-11 05:09:55.857564 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 23.58s 2026-04-11 05:09:55.857574 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.38s 2026-04-11 05:09:55.857585 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.10s 2026-04-11 05:09:55.857596 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.84s 2026-04-11 05:09:55.857607 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 16.80s 2026-04-11 05:09:55.857618 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 15.84s 2026-04-11 05:09:55.857629 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.24s 2026-04-11 05:09:55.857640 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.21s 2026-04-11 05:09:55.857650 | orchestrator | service-check-containers : ovn_db | Check containers 
-------------------- 5.29s 2026-04-11 05:09:55.857661 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 4.84s 2026-04-11 05:09:55.857672 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.90s 2026-04-11 05:09:55.857683 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.70s 2026-04-11 05:09:55.857693 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.63s 2026-04-11 05:09:55.857704 | orchestrator | Group hosts based on Kolla action --------------------------------------- 3.49s 2026-04-11 05:09:55.857715 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.18s 2026-04-11 05:09:55.857725 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.16s 2026-04-11 05:09:55.857736 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.83s 2026-04-11 05:09:55.857747 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.83s 2026-04-11 05:09:55.857758 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.72s 2026-04-11 05:09:56.074708 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-11 05:09:56.074822 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-11 05:09:56.074836 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh 2026-04-11 05:09:56.082293 | orchestrator | + set -e 2026-04-11 05:09:56.082384 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-11 05:09:56.082403 | orchestrator | ++ export INTERACTIVE=false 2026-04-11 05:09:56.082420 | orchestrator | ++ INTERACTIVE=false 2026-04-11 05:09:56.082429 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-11 05:09:56.082437 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-11 05:09:56.082446 | 
orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-04-11 05:09:57.397915 | orchestrator | 2026-04-11 05:09:57 | INFO  | Prepare task for execution of ceph-rolling_update. 2026-04-11 05:09:57.465432 | orchestrator | 2026-04-11 05:09:57 | INFO  | Task 89129255-2437-4d8a-bd33-10680cc79f12 (ceph-rolling_update) was prepared for execution. 2026-04-11 05:09:57.465547 | orchestrator | 2026-04-11 05:09:57 | INFO  | It takes a moment until task 89129255-2437-4d8a-bd33-10680cc79f12 (ceph-rolling_update) has been started and output is visible here. 2026-04-11 05:11:22.741982 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-11 05:11:22.742166 | orchestrator | 2.16.14 2026-04-11 05:11:22.742183 | orchestrator | 2026-04-11 05:11:22.742194 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-04-11 05:11:22.742204 | orchestrator | 2026-04-11 05:11:22.742214 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-04-11 05:11:22.742224 | orchestrator | Saturday 11 April 2026 05:10:05 +0000 (0:00:01.935) 0:00:01.935 ******** 2026-04-11 05:11:22.742233 | orchestrator | skipping: [localhost] 2026-04-11 05:11:22.742242 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-04-11 05:11:22.742252 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-04-11 05:11:22.742261 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-04-11 05:11:22.742270 | orchestrator | 2026-04-11 05:11:22.742279 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-04-11 05:11:22.742287 | orchestrator | 2026-04-11 05:11:22.742296 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-04-11 05:11:22.742305 | orchestrator | Saturday 11 April 2026 
05:10:08 +0000 (0:00:02.910) 0:00:04.845 ******** 2026-04-11 05:11:22.742314 | orchestrator | ok: [testbed-node-0] => { 2026-04-11 05:11:22.742323 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-11 05:11:22.742332 | orchestrator | } 2026-04-11 05:11:22.742341 | orchestrator | ok: [testbed-node-1] => { 2026-04-11 05:11:22.742350 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-11 05:11:22.742359 | orchestrator | } 2026-04-11 05:11:22.742367 | orchestrator | ok: [testbed-node-2] => { 2026-04-11 05:11:22.742376 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-11 05:11:22.742385 | orchestrator | } 2026-04-11 05:11:22.742393 | orchestrator | ok: [testbed-node-3] => { 2026-04-11 05:11:22.742402 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-11 05:11:22.742411 | orchestrator | } 2026-04-11 05:11:22.742419 | orchestrator | ok: [testbed-node-4] => { 2026-04-11 05:11:22.742428 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-11 05:11:22.742437 | orchestrator | } 2026-04-11 05:11:22.742459 | orchestrator | ok: [testbed-node-5] => { 2026-04-11 05:11:22.742468 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-11 05:11:22.742477 | orchestrator | } 2026-04-11 05:11:22.742485 | orchestrator | ok: [testbed-manager] => { 2026-04-11 05:11:22.742494 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-11 05:11:22.742503 | orchestrator | } 2026-04-11 05:11:22.742511 | orchestrator | 2026-04-11 05:11:22.742520 | orchestrator | TASK [Gather facts] ************************************************************ 2026-04-11 05:11:22.742530 | orchestrator | Saturday 11 April 2026 05:10:14 +0000 (0:00:05.554) 0:00:10.399 ******** 2026-04-11 05:11:22.742540 | orchestrator | skipping: [testbed-node-0] 2026-04-11 
05:11:22.742569 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:11:22.742579 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:11:22.742589 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:11:22.742598 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:11:22.742608 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:11:22.742618 | orchestrator | ok: [testbed-manager] 2026-04-11 05:11:22.742628 | orchestrator | 2026-04-11 05:11:22.742638 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-04-11 05:11:22.742648 | orchestrator | Saturday 11 April 2026 05:10:20 +0000 (0:00:06.385) 0:00:16.785 ******** 2026-04-11 05:11:22.742659 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:11:22.742669 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:11:22.742679 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:11:22.742688 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:11:22.742698 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:11:22.742708 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:11:22.742718 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:11:22.742728 | orchestrator | 2026-04-11 05:11:22.742738 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-04-11 05:11:22.742748 | orchestrator | Saturday 11 April 2026 05:10:52 +0000 (0:00:32.106) 0:00:48.891 ******** 2026-04-11 05:11:22.742758 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:11:22.742768 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:11:22.742777 | orchestrator | ok: 
[testbed-node-2] 2026-04-11 05:11:22.742787 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:11:22.742796 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:11:22.742806 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:11:22.742816 | orchestrator | ok: [testbed-manager] 2026-04-11 05:11:22.742826 | orchestrator | 2026-04-11 05:11:22.742836 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-11 05:11:22.742846 | orchestrator | Saturday 11 April 2026 05:10:54 +0000 (0:00:02.127) 0:00:51.019 ******** 2026-04-11 05:11:22.742857 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-11 05:11:22.742868 | orchestrator | 2026-04-11 05:11:22.742878 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-11 05:11:22.742888 | orchestrator | Saturday 11 April 2026 05:10:57 +0000 (0:00:02.709) 0:00:53.729 ******** 2026-04-11 05:11:22.742897 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:11:22.742905 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:11:22.742914 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:11:22.742922 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:11:22.742931 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:11:22.742939 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:11:22.742948 | orchestrator | ok: [testbed-manager] 2026-04-11 05:11:22.742956 | orchestrator | 2026-04-11 05:11:22.742980 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-11 05:11:22.742990 | orchestrator | Saturday 11 April 2026 05:10:59 +0000 (0:00:02.468) 0:00:56.198 ******** 2026-04-11 05:11:22.742998 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:11:22.743007 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:11:22.743015 | orchestrator | ok: [testbed-node-2] 
2026-04-11 05:11:22.743024 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:11:22.743032 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:11:22.743041 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:11:22.743049 | orchestrator | ok: [testbed-manager] 2026-04-11 05:11:22.743058 | orchestrator | 2026-04-11 05:11:22.743066 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-11 05:11:22.743081 | orchestrator | Saturday 11 April 2026 05:11:01 +0000 (0:00:01.958) 0:00:58.156 ******** 2026-04-11 05:11:22.743090 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:11:22.743115 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:11:22.743124 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:11:22.743132 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:11:22.743141 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:11:22.743149 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:11:22.743158 | orchestrator | ok: [testbed-manager] 2026-04-11 05:11:22.743167 | orchestrator | 2026-04-11 05:11:22.743175 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 05:11:22.743184 | orchestrator | Saturday 11 April 2026 05:11:04 +0000 (0:00:02.759) 0:01:00.915 ******** 2026-04-11 05:11:22.743192 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:11:22.743201 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:11:22.743209 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:11:22.743218 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:11:22.743226 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:11:22.743234 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:11:22.743243 | orchestrator | ok: [testbed-manager] 2026-04-11 05:11:22.743252 | orchestrator | 2026-04-11 05:11:22.743260 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-11 05:11:22.743269 | orchestrator | Saturday 11 April 2026 05:11:06 +0000 
(0:00:02.169) 0:01:03.085 ******** 2026-04-11 05:11:22.743277 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:11:22.743286 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:11:22.743294 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:11:22.743303 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:11:22.743311 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:11:22.743320 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:11:22.743333 | orchestrator | ok: [testbed-manager] 2026-04-11 05:11:22.743342 | orchestrator | 2026-04-11 05:11:22.743351 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-11 05:11:22.743359 | orchestrator | Saturday 11 April 2026 05:11:09 +0000 (0:00:02.180) 0:01:05.265 ******** 2026-04-11 05:11:22.743368 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:11:22.743376 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:11:22.743385 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:11:22.743393 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:11:22.743402 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:11:22.743410 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:11:22.743419 | orchestrator | ok: [testbed-manager] 2026-04-11 05:11:22.743427 | orchestrator | 2026-04-11 05:11:22.743436 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-11 05:11:22.743445 | orchestrator | Saturday 11 April 2026 05:11:11 +0000 (0:00:01.949) 0:01:07.215 ******** 2026-04-11 05:11:22.743453 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:11:22.743462 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:11:22.743470 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:11:22.743479 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:11:22.743487 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:11:22.743496 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:11:22.743504 | orchestrator | 
skipping: [testbed-manager]
2026-04-11 05:11:22.743513 | orchestrator |
2026-04-11 05:11:22.743522 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-11 05:11:22.743530 | orchestrator | Saturday 11 April 2026 05:11:13 +0000 (0:00:02.247) 0:01:09.463 ********
2026-04-11 05:11:22.743539 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:11:22.743548 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:11:22.743556 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:11:22.743565 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:11:22.743573 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:11:22.743582 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:11:22.743590 | orchestrator | ok: [testbed-manager]
2026-04-11 05:11:22.743599 | orchestrator |
2026-04-11 05:11:22.743608 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-11 05:11:22.743622 | orchestrator | Saturday 11 April 2026 05:11:15 +0000 (0:00:01.915) 0:01:11.378 ********
2026-04-11 05:11:22.743631 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 05:11:22.743639 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:11:22.743648 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:11:22.743656 | orchestrator |
2026-04-11 05:11:22.743665 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-11 05:11:22.743674 | orchestrator | Saturday 11 April 2026 05:11:17 +0000 (0:00:01.932) 0:01:13.311 ********
2026-04-11 05:11:22.743682 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:11:22.743691 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:11:22.743699 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:11:22.743707 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:11:22.743716 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:11:22.743725 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:11:22.743733 | orchestrator | ok: [testbed-manager]
2026-04-11 05:11:22.743742 | orchestrator |
2026-04-11 05:11:22.743750 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-11 05:11:22.743759 | orchestrator | Saturday 11 April 2026 05:11:19 +0000 (0:00:02.236) 0:01:15.547 ********
2026-04-11 05:11:22.743767 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 05:11:22.743776 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:11:22.743785 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:11:22.743793 | orchestrator |
2026-04-11 05:11:22.743802 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-11 05:11:22.743811 | orchestrator | Saturday 11 April 2026 05:11:22 +0000 (0:00:03.248) 0:01:18.796 ********
2026-04-11 05:11:22.743825 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 05:11:45.886255 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 05:11:45.886372 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 05:11:45.886389 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:45.886402 | orchestrator |
2026-04-11 05:11:45.886415 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-11 05:11:45.886428 | orchestrator | Saturday 11 April 2026 05:11:24 +0000 (0:00:01.478) 0:01:20.274 ********
2026-04-11 05:11:45.886441 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 05:11:45.886455 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 05:11:45.886467 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 05:11:45.886478 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:45.886489 | orchestrator |
2026-04-11 05:11:45.886500 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-11 05:11:45.886511 | orchestrator | Saturday 11 April 2026 05:11:25 +0000 (0:00:01.901) 0:01:22.176 ********
2026-04-11 05:11:45.886539 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:11:45.886576 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:11:45.886588 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:11:45.886599 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:45.886610 | orchestrator |
2026-04-11 05:11:45.886621 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-11 05:11:45.886632 | orchestrator | Saturday 11 April 2026 05:11:27 +0000 (0:00:01.185) 0:01:23.361 ********
2026-04-11 05:11:45.886645 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1b0d6fe4ad27', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 05:11:20.004326', 'end': '2026-04-11 05:11:20.061757', 'delta': '0:00:00.057431', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1b0d6fe4ad27'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 05:11:45.886681 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1a56ecc96cb4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 05:11:20.544483', 'end': '2026-04-11 05:11:20.586856', 'delta': '0:00:00.042373', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1a56ecc96cb4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 05:11:45.886694 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f023dde40a6c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 05:11:21.326625', 'end': '2026-04-11 05:11:21.376047', 'delta': '0:00:00.049422', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f023dde40a6c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 05:11:45.886705 | orchestrator |
2026-04-11 05:11:45.886716 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-11 05:11:45.886727 | orchestrator | Saturday 11 April 2026 05:11:28 +0000 (0:00:01.227) 0:01:24.589 ********
2026-04-11 05:11:45.886738 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:11:45.886750 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:11:45.886763 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:11:45.886775 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:11:45.886796 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:11:45.886807 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:11:45.886820 | orchestrator | ok: [testbed-manager]
2026-04-11 05:11:45.886833 | orchestrator |
2026-04-11 05:11:45.886845 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-11 05:11:45.886859 | orchestrator | Saturday 11 April 2026 05:11:31 +0000 (0:00:02.710) 0:01:27.300 ********
2026-04-11 05:11:45.886877 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:45.886889 | orchestrator |
2026-04-11 05:11:45.886901 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-11 05:11:45.886915 | orchestrator | Saturday 11 April 2026 05:11:32 +0000 (0:00:01.257) 0:01:28.557 ********
2026-04-11 05:11:45.886927 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:11:45.886939 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:11:45.886952 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:11:45.886964 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:11:45.886976 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:11:45.886989 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:11:45.887001 | orchestrator | ok: [testbed-manager]
2026-04-11 05:11:45.887014 | orchestrator |
2026-04-11 05:11:45.887027 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-11 05:11:45.887040 | orchestrator | Saturday 11 April 2026 05:11:34 +0000 (0:00:02.078) 0:01:30.636 ********
2026-04-11 05:11:45.887053 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:11:45.887065 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-11 05:11:45.887078 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-11 05:11:45.887090 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-11 05:11:45.887131 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-11 05:11:45.887143 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-11 05:11:45.887154 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-11 05:11:45.887165 | orchestrator |
2026-04-11 05:11:45.887176 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 05:11:45.887187 | orchestrator | Saturday 11 April 2026 05:11:37 +0000 (0:00:03.314) 0:01:33.950 ********
2026-04-11 05:11:45.887198 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:11:45.887208 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:11:45.887219 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:11:45.887230 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:11:45.887240 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:11:45.887251 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:11:45.887262 | orchestrator | ok: [testbed-manager]
2026-04-11 05:11:45.887272 | orchestrator |
2026-04-11 05:11:45.887283 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-11 05:11:45.887294 | orchestrator | Saturday 11 April 2026 05:11:40 +0000 (0:00:02.278) 0:01:36.229 ********
2026-04-11 05:11:45.887305 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:45.887316 | orchestrator |
2026-04-11 05:11:45.887327 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-11 05:11:45.887337 | orchestrator | Saturday 11 April 2026 05:11:41 +0000 (0:00:01.156) 0:01:37.385 ********
2026-04-11 05:11:45.887348 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:45.887359 | orchestrator |
2026-04-11 05:11:45.887370 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 05:11:45.887380 | orchestrator | Saturday 11 April 2026 05:11:42 +0000 (0:00:01.211) 0:01:38.597 ********
2026-04-11 05:11:45.887391 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:45.887402 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:11:45.887413 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:11:45.887424 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:11:45.887434 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:11:45.887445 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:11:45.887456 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:11:45.887474 | orchestrator |
2026-04-11 05:11:45.887485 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-11 05:11:45.887496 | orchestrator | Saturday 11 April 2026 05:11:45 +0000 (0:00:02.653) 0:01:41.250 ********
2026-04-11 05:11:45.887507 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:45.887517 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:11:45.887528 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:11:45.887539 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:11:45.887550 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:11:45.887560 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:11:45.887579 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:11:57.903005 | orchestrator |
2026-04-11 05:11:57.903216 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-11 05:11:57.903249 | orchestrator | Saturday 11 April 2026 05:11:47 +0000 (0:00:01.991) 0:01:43.242 ********
2026-04-11 05:11:57.903270 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:57.903291 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:11:57.903309 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:11:57.903327 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:11:57.903345 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:11:57.903363 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:11:57.903382 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:11:57.903400 | orchestrator |
2026-04-11 05:11:57.903418 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-11 05:11:57.903437 | orchestrator | Saturday 11 April 2026 05:11:49 +0000 (0:00:02.196) 0:01:45.438 ********
2026-04-11 05:11:57.903456 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:57.903474 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:11:57.903492 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:11:57.903510 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:11:57.903529 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:11:57.903549 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:11:57.903568 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:11:57.903587 | orchestrator |
2026-04-11 05:11:57.903606 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-11 05:11:57.903626 | orchestrator | Saturday 11 April 2026 05:11:51 +0000 (0:00:01.974) 0:01:47.413 ********
2026-04-11 05:11:57.903645 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:57.903664 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:11:57.903683 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:11:57.903702 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:11:57.903720 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:11:57.903738 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:11:57.903758 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:11:57.903777 | orchestrator |
2026-04-11 05:11:57.903795 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-11 05:11:57.903836 | orchestrator | Saturday 11 April 2026 05:11:53 +0000 (0:00:02.168) 0:01:49.582 ********
2026-04-11 05:11:57.903856 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:57.903874 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:11:57.903893 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:11:57.903911 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:11:57.903929 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:11:57.903947 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:11:57.903965 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:11:57.903984 | orchestrator |
2026-04-11 05:11:57.904003 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-11 05:11:57.904019 | orchestrator | Saturday 11 April 2026 05:11:55 +0000 (0:00:02.143) 0:01:51.725 ********
2026-04-11 05:11:57.904030 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:57.904041 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:11:57.904051 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:11:57.904062 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:11:57.904118 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:11:57.904130 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:11:57.904140 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:11:57.904151 | orchestrator |
2026-04-11 05:11:57.904162 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-11 05:11:57.904172 | orchestrator | Saturday 11 April 2026 05:11:57 +0000 (0:00:02.134) 0:01:53.860 ********
2026-04-11 05:11:57.904186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:57.904202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:57.904213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:57.904250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-11 05:11:57.904264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:57.904276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:57.904287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:57.904310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4dd7cb49', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-11 05:11:57.904348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:57.904382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.156720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.156789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.156806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.156824 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:11:58.156832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-11 05:11:58.156838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.156842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.156847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.156864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c2a3b65', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-11 05:11:58.156877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.156882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.156886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.156890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.156894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.156898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-11 05:11:58.156906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.361718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.361803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.361850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6e1b70df', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-11 05:11:58.361863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.361870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.361878 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:11:58.361900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 05:11:58.361907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200', 'dm-uuid-LVM-Bzm8veJ8WajxWE1rbQG3D6L1YQ7NRJWE5nYLkJZ3j15jpE3LHjt0hSXc3WZuWEzG'], 'uuids': ['5687e399-36a2-4cfe-ae2f-5c9610714106'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG']}})
2026-04-11 05:11:58.361925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7', 'scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d9c4f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': []}})  2026-04-11 05:11:58.361934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PQeocr-BDfK-Omm3-UVAY-4ZFi-qC83-UyfjmY', 'scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c', 'scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003']}})  2026-04-11 05:11:58.361942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.361949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.361957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-28-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 05:11:58.361965 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:11:58.361978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.520861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ', 'dm-uuid-CRYPT-LUKS2-4ce930e6d90647c5bf5f978d8b977bd0-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 05:11:58.520977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 
05:11:58.520988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003', 'dm-uuid-LVM-pkDfTbVQleSwcS4k7Dh9BVsoBeNfZTa2LK4cAT3noeZwIltxQlTmbG23aNcLYOeQ'], 'uuids': ['4ce930e6-d906-47c5-bf5f-978d8b977bd0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ']}})  2026-04-11 05:11:58.520997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ESicMG-Y3he-y5ZC-yq3K-67sS-s0jj-bJ518K', 'scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898', 'scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200']}})  2026-04-11 05:11:58.521004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.521012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.521032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2', 'dm-uuid-LVM-1JO1XI6e6VuGVeVzDykcfKbBtikjhudLLEUIdm7ttGNsolk0UkQjcUO4narXEX2E'], 'uuids': ['9d724d10-77ae-4967-ad2d-00bd58cf4b58'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E']}})  2026-04-11 05:11:58.521056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f54fce7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-11 05:11:58.521065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac', 'scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7ad0a670', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 05:11:58.521073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gs6fgb-1Wcf-xL0p-5nrc-t0Sp-iDOp-vEqK0z', 'scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb', 'scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855']}})  2026-04-11 05:11:58.521080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.521141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.651271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.651408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.651437 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 05:11:58.651460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.651479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG', 'dm-uuid-CRYPT-LUKS2-5687e39936a24cfeae2f5c9610714106-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 05:11:58.651500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh', 'dm-uuid-CRYPT-LUKS2-f995fcc5d8e74f9b8df633437ec8101a-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 05:11:58.651520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.651589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855', 'dm-uuid-LVM-K7WW8kSs32CapDCsexGLtC6qsV1U5049IOnZa3AHrzxg1HkvDRqme1iBPNHDbFWh'], 'uuids': ['f995fcc5-d8e7-4f9b-8df6-33437ec8101a'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh']}})  2026-04-11 05:11:58.651612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MaeyQs-lCkd-15by-ONeM-2vsv-Cp22-T0mgnh', 'scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f', 'scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2']}})  2026-04-11 05:11:58.651625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.651641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '122e9594', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 05:11:58.651667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.651722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.824315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.824387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E', 'dm-uuid-CRYPT-LUKS2-9d724d1077ae4967ad2d00bd58cf4b58-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 05:11:58.824396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056', 'dm-uuid-LVM-6h6BzLnxVSITPOCXsTMPdEdwYnxpyl6jcENBjNwdWV4iIXI6HpUJIGXCmHnbKWOn'], 'uuids': ['9614ebde-9763-41b8-8070-f8f6acc1ef2b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn']}})  2026-04-11 05:11:58.824401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735', 'scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '17a8d280', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 05:11:58.824408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gv5rB0-5v31-5ChI-IvnR-CmdW-Foh5-mihe2a', 'scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3', 'scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412']}})  2026-04-11 05:11:58.824427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.824431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.824449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 05:11:58.824453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.824458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ', 'dm-uuid-CRYPT-LUKS2-bdcb2384073e4d9c84ce45a3274a4645-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 05:11:58.824462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.824466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412', 'dm-uuid-LVM-VdQ7qTAVdW9b0W0u4soeoyYCMykAdMqIVywyC0poxaFsTavehHwqykfd0GhP5gkQ'], 'uuids': ['bdcb2384-073e-4d9c-84ce-45a3274a4645'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ']}})  2026-04-11 05:11:58.824473 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JtVcog-BSy1-h8Zb-tm9w-DiRX-1Dbq-bS56zI', 'scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78', 'scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056']}})  2026-04-11 05:11:58.824478 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:11:58.824482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:11:58.824496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a75c226', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16'], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-11 05:12:00.432879 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:12:00.432986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:12:00.433031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:12:00.433045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn', 'dm-uuid-CRYPT-LUKS2-9614ebde976341b88070f8f6acc1ef2b-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 05:12:00.433059 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:12:00.433070 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:12:00.433081 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:12:00.433156 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:12:00.433171 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-56-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 05:12:00.433182 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:12:00.433211 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:12:00.433236 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:12:00.433270 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68', 'scsi-SQEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cbafe9d3', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part16', 'scsi-SQEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part14', 'scsi-SQEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part15', 'scsi-SQEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part1', 'scsi-SQEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-11 05:12:00.433285 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:12:00.433297 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:12:00.433307 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:12:00.433317 | orchestrator | 2026-04-11 05:12:00.433328 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 05:12:00.433340 | orchestrator | Saturday 11 April 2026 05:12:00 +0000 (0:00:02.406) 0:01:56.266 ******** 2026-04-11 05:12:00.433364 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.607893 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.608079 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.608170 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.608189 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.608202 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.608213 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.608273 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4dd7cb49', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.608294 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.608307 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.608319 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:12:00.608333 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.608360 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.859969 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.860190 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.860242 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.860263 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.860281 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.860366 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c2a3b65', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15', 
'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.860401 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.860423 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.860454 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:12:00.860478 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.860500 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:00.860533 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.015809 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.015942 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.015969 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.016014 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.016068 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6e1b70df', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.016161 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.016183 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.016206 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.016220 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200', 'dm-uuid-LVM-Bzm8veJ8WajxWE1rbQG3D6L1YQ7NRJWE5nYLkJZ3j15jpE3LHjt0hSXc3WZuWEzG'], 'uuids': ['5687e399-36a2-4cfe-ae2f-5c9610714106'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG']}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.016245 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7', 'scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d9c4f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.153562 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PQeocr-BDfK-Omm3-UVAY-4ZFi-qC83-UyfjmY', 'scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c', 'scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003']}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.153667 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:12:01.153685 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.153720 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.153733 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-28-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.153745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.153774 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ', 'dm-uuid-CRYPT-LUKS2-4ce930e6d90647c5bf5f978d8b977bd0-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.153792 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.153804 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003', 'dm-uuid-LVM-pkDfTbVQleSwcS4k7Dh9BVsoBeNfZTa2LK4cAT3noeZwIltxQlTmbG23aNcLYOeQ'], 'uuids': ['4ce930e6-d906-47c5-bf5f-978d8b977bd0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ']}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.153825 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ESicMG-Y3he-y5ZC-yq3K-67sS-s0jj-bJ518K', 'scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898', 'scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200']}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.153837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.153865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f54fce7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.292708 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.292808 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.292823 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2', 'dm-uuid-LVM-1JO1XI6e6VuGVeVzDykcfKbBtikjhudLLEUIdm7ttGNsolk0UkQjcUO4narXEX2E'], 'uuids': ['9d724d10-77ae-4967-ad2d-00bd58cf4b58'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E']}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.292837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.292873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG', 'dm-uuid-CRYPT-LUKS2-5687e39936a24cfeae2f5c9610714106-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.292923 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac', 'scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7ad0a670', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.292938 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gs6fgb-1Wcf-xL0p-5nrc-t0Sp-iDOp-vEqK0z', 'scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb', 'scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855']}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.292952 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.292964 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.292975 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.292987 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:12:01.293013 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.293036 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh', 'dm-uuid-CRYPT-LUKS2-f995fcc5d8e74f9b8df633437ec8101a-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.403591 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.403701 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855', 'dm-uuid-LVM-K7WW8kSs32CapDCsexGLtC6qsV1U5049IOnZa3AHrzxg1HkvDRqme1iBPNHDbFWh'], 'uuids': ['f995fcc5-d8e7-4f9b-8df6-33437ec8101a'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh']}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.403728 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MaeyQs-lCkd-15by-ONeM-2vsv-Cp22-T0mgnh', 'scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f', 'scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2']}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.403774 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.403853 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '122e9594', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.403880 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.403902 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.403944 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E', 'dm-uuid-CRYPT-LUKS2-9d724d1077ae4967ad2d00bd58cf4b58-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.403967 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.404001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056', 'dm-uuid-LVM-6h6BzLnxVSITPOCXsTMPdEdwYnxpyl6jcENBjNwdWV4iIXI6HpUJIGXCmHnbKWOn'], 'uuids': ['9614ebde-9763-41b8-8070-f8f6acc1ef2b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn']}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.530459 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735', 'scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '17a8d280', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.530553 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gv5rB0-5v31-5ChI-IvnR-CmdW-Foh5-mihe2a', 'scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3', 'scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': ['ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412']}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.530607 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.530621 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:12:01.530634 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:01.530646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None,
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.530676 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.530688 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ', 'dm-uuid-CRYPT-LUKS2-bdcb2384073e4d9c84ce45a3274a4645-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.530699 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 
'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.530723 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.530734 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.530745 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': 
{'ids': ['dm-name-ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412', 'dm-uuid-LVM-VdQ7qTAVdW9b0W0u4soeoyYCMykAdMqIVywyC0poxaFsTavehHwqykfd0GhP5gkQ'], 'uuids': ['bdcb2384-073e-4d9c-84ce-45a3274a4645'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ']}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.530766 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.592616 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JtVcog-BSy1-h8Zb-tm9w-DiRX-1Dbq-bS56zI', 'scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78', 'scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056']}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.592739 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-56-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.592752 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.592761 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.592770 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.592801 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a75c226', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 
'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.592822 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.592830 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:01.592845 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68', 'scsi-SQEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cbafe9d3', 'removable': '0', 'support_discard': '4096', 
'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part16', 'scsi-SQEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part14', 'scsi-SQEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part15', 'scsi-SQEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part1', 'scsi-SQEMU_QEMU_HARDDISK_cbafe9d3-7c35-4bd1-ae60-dc778a424d68-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:20.620061 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:20.620208 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:12:20.620222 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn', 'dm-uuid-CRYPT-LUKS2-9614ebde976341b88070f8f6acc1ef2b-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:20.620231 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:12:20.620240 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:12:20.620251 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:12:20.620259 | orchestrator |
2026-04-11 05:12:20.620268 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-11 05:12:20.620277 | orchestrator | Saturday 11 April 2026 05:12:02 +0000 (0:00:02.711) 0:01:58.977 ********
2026-04-11 05:12:20.620285 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:12:20.620294 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:12:20.620302 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:12:20.620310 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:12:20.620318 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:12:20.620345 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:12:20.620353 | orchestrator | ok: [testbed-manager]
2026-04-11 05:12:20.620361 | orchestrator |
2026-04-11 05:12:20.620369 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-11 05:12:20.620377 | orchestrator | Saturday 11 April 2026 05:12:05 +0000 (0:00:02.550) 0:02:01.528 ********
2026-04-11 05:12:20.620385 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:12:20.620393 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:12:20.620400 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:12:20.620408 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:12:20.620416 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:12:20.620424 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:12:20.620432 | orchestrator | ok: [testbed-manager]
2026-04-11 05:12:20.620440 | orchestrator |
2026-04-11 05:12:20.620447 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-11 05:12:20.620455 | orchestrator | Saturday 11 April 2026 05:12:07 +0000 (0:00:02.166) 0:02:03.694 ********
2026-04-11 05:12:20.620463 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:12:20.620471 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:12:20.620478 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:12:20.620486 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:12:20.620494 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:12:20.620502 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:12:20.620510 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:12:20.620518 | orchestrator |
2026-04-11 05:12:20.620526 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-11 05:12:20.620534 | orchestrator | Saturday 11 April 2026 05:12:10 +0000 (0:00:02.529) 0:02:06.224 ********
2026-04-11 05:12:20.620542 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:12:20.620550 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:12:20.620572 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:12:20.620581 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:12:20.620590 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:12:20.620600 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:12:20.620609 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:12:20.620618 | orchestrator |
2026-04-11 05:12:20.620627 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-11 05:12:20.620637 | orchestrator | Saturday 11 April 2026 05:12:12 +0000 (0:00:01.992) 0:02:08.217 ********
2026-04-11 05:12:20.620652 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:12:20.620662 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:12:20.620671 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:12:20.620680 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:12:20.620689 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:12:20.620698 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:12:20.620708 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-04-11 05:12:20.620718 | orchestrator |
2026-04-11 05:12:20.620727 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-11 05:12:20.620736 | orchestrator | Saturday 11 April 2026 05:12:14 +0000 (0:00:02.617) 0:02:10.834 ********
2026-04-11 05:12:20.620745 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:12:20.620755 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:12:20.620764 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:12:20.620773 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:12:20.620782 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:12:20.620791 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:12:20.620800 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:12:20.620809 | orchestrator |
2026-04-11 05:12:20.620818 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-11 05:12:20.620828 | orchestrator | Saturday 11 April 2026 05:12:16 +0000 (0:00:03.197) 0:02:12.760 ********
2026-04-11 05:12:20.620837 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 05:12:20.620846 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-11 05:12:20.620861 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 05:12:20.620870 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-11 05:12:20.620879 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-11 05:12:20.620889 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-11 05:12:20.620898 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-11 05:12:20.620907 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-11 05:12:20.620917 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 05:12:20.620926 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 05:12:20.620936 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-11 05:12:20.620945 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-11 05:12:20.620953 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-11 05:12:20.620961 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-11 05:12:20.620969 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-11 05:12:20.620977 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-11 05:12:20.620984 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-11 05:12:20.620992 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-11 05:12:20.621000 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-11 05:12:20.621008 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-11 05:12:20.621016 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-11 05:12:20.621024 | orchestrator |
2026-04-11 05:12:20.621032 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-11 05:12:20.621040 | orchestrator | Saturday 11 April 2026 05:12:19 +0000 (0:00:03.197) 0:02:15.957 ********
2026-04-11 05:12:20.621049 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 05:12:20.621057 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 05:12:20.621065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 05:12:20.621073 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:12:20.621081 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-11 05:12:20.621089 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-11 05:12:20.621119 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-11 05:12:20.621127 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:12:20.621135 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-11 05:12:20.621143 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-11 05:12:20.621151 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 05:12:20.621159 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:12:20.621167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-11 05:12:20.621175 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-11 05:12:20.621183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-11 05:12:20.621191 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:12:20.621199 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-11 05:12:20.621207 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-11 05:12:20.621214 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-11 05:12:20.621222 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:12:20.621230 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-11 05:12:20.621238 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-11 05:12:20.621246 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-11 05:12:20.621254 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:12:20.621262 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-11 05:12:20.621281 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-11 05:13:08.245476 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-11 05:13:08.245591 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:13:08.245607 | orchestrator |
2026-04-11 05:13:08.245620 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-11 05:13:08.245632 | orchestrator | Saturday 11 April 2026 05:12:21 +0000 (0:00:02.023) 0:02:17.981 ********
2026-04-11 05:13:08.245643 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:13:08.245655 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:13:08.245682 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:13:08.245694 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:13:08.245706 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 05:13:08.245718 | orchestrator |
2026-04-11 05:13:08.245730 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 05:13:08.245743 | orchestrator | Saturday 11 April 2026 05:12:23 +0000 (0:00:02.091) 0:02:20.072 ********
2026-04-11 05:13:08.245754 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:13:08.245765 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:13:08.245776 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:13:08.245788 | orchestrator |
2026-04-11 05:13:08.245799 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 05:13:08.245810 | orchestrator | Saturday 11 April 2026 05:12:25 +0000 (0:00:01.443) 0:02:21.516 ********
2026-04-11 05:13:08.245822 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:13:08.245833 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:13:08.245844 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:13:08.245855 | orchestrator |
2026-04-11 05:13:08.245866 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 05:13:08.245878 | orchestrator | Saturday 11 April 2026 05:12:26 +0000 (0:00:01.442) 0:02:22.958 ********
2026-04-11 05:13:08.245889 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:13:08.245900 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:13:08.245911 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:13:08.245922 | orchestrator |
2026-04-11 05:13:08.245933 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 05:13:08.245944 | orchestrator | Saturday 11 April 2026 05:12:28 +0000 (0:00:01.364) 0:02:24.323 ********
2026-04-11 05:13:08.245955 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:13:08.245967 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:13:08.245978 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:13:08.245989 | orchestrator |
2026-04-11 05:13:08.246003 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 05:13:08.246071 | orchestrator | Saturday 11 April 2026 05:12:29 +0000 (0:00:01.474) 0:02:25.797 ********
2026-04-11 05:13:08.246087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 05:13:08.246130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 05:13:08.246150 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 05:13:08.246170 | orchestrator |
skipping: [testbed-node-3] 2026-04-11 05:13:08.246209 | orchestrator | 2026-04-11 05:13:08.246231 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 05:13:08.246250 | orchestrator | Saturday 11 April 2026 05:12:30 +0000 (0:00:01.407) 0:02:27.205 ******** 2026-04-11 05:13:08.246268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 05:13:08.246287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 05:13:08.246305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 05:13:08.246324 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:08.246342 | orchestrator | 2026-04-11 05:13:08.246362 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 05:13:08.246411 | orchestrator | Saturday 11 April 2026 05:12:32 +0000 (0:00:01.680) 0:02:28.885 ******** 2026-04-11 05:13:08.246428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 05:13:08.246440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 05:13:08.246451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 05:13:08.246461 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:08.246472 | orchestrator | 2026-04-11 05:13:08.246483 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 05:13:08.246494 | orchestrator | Saturday 11 April 2026 05:12:34 +0000 (0:00:01.700) 0:02:30.586 ******** 2026-04-11 05:13:08.246505 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:13:08.246515 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:13:08.246526 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:13:08.246537 | orchestrator | 2026-04-11 05:13:08.246548 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 05:13:08.246558 
| orchestrator | Saturday 11 April 2026 05:12:36 +0000 (0:00:01.658) 0:02:32.244 ******** 2026-04-11 05:13:08.246570 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-11 05:13:08.246581 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-11 05:13:08.246591 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-11 05:13:08.246602 | orchestrator | 2026-04-11 05:13:08.246613 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 05:13:08.246624 | orchestrator | Saturday 11 April 2026 05:12:37 +0000 (0:00:01.530) 0:02:33.774 ******** 2026-04-11 05:13:08.246635 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:13:08.246646 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:13:08.246657 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:13:08.246668 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:13:08.246679 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:13:08.246709 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:13:08.246721 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:13:08.246731 | orchestrator | 2026-04-11 05:13:08.246742 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 05:13:08.246753 | orchestrator | Saturday 11 April 2026 05:12:39 +0000 (0:00:01.836) 0:02:35.611 ******** 2026-04-11 05:13:08.246763 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:13:08.246782 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:13:08.246793 | orchestrator | ok: [testbed-node-0 
-> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:13:08.246804 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:13:08.246815 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:13:08.246825 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:13:08.246836 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:13:08.246846 | orchestrator | 2026-04-11 05:13:08.246857 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-04-11 05:13:08.246868 | orchestrator | Saturday 11 April 2026 05:12:42 +0000 (0:00:03.011) 0:02:38.623 ******** 2026-04-11 05:13:08.246878 | orchestrator | changed: [testbed-node-3] 2026-04-11 05:13:08.246889 | orchestrator | changed: [testbed-node-5] 2026-04-11 05:13:08.246900 | orchestrator | changed: [testbed-manager] 2026-04-11 05:13:08.246911 | orchestrator | changed: [testbed-node-4] 2026-04-11 05:13:08.246921 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:13:08.246932 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:13:08.246951 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:13:08.246962 | orchestrator | 2026-04-11 05:13:08.246973 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] *********************** 2026-04-11 05:13:08.246983 | orchestrator | Saturday 11 April 2026 05:12:53 +0000 (0:00:10.950) 0:02:49.574 ******** 2026-04-11 05:13:08.246994 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:08.247005 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:08.247015 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:08.247026 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:08.247036 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:08.247047 | 
orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:08.247057 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:08.247068 | orchestrator | 2026-04-11 05:13:08.247079 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ******************************** 2026-04-11 05:13:08.247117 | orchestrator | Saturday 11 April 2026 05:12:55 +0000 (0:00:02.146) 0:02:51.721 ******** 2026-04-11 05:13:08.247134 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:08.247145 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:08.247156 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:08.247167 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:08.247177 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:08.247188 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:08.247198 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:08.247209 | orchestrator | 2026-04-11 05:13:08.247220 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-04-11 05:13:08.247231 | orchestrator | Saturday 11 April 2026 05:12:57 +0000 (0:00:01.900) 0:02:53.621 ******** 2026-04-11 05:13:08.247242 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:08.247252 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:13:08.247263 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:13:08.247274 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:13:08.247285 | orchestrator | changed: [testbed-node-3] 2026-04-11 05:13:08.247296 | orchestrator | changed: [testbed-node-4] 2026-04-11 05:13:08.247306 | orchestrator | changed: [testbed-node-5] 2026-04-11 05:13:08.247317 | orchestrator | 2026-04-11 05:13:08.247328 | orchestrator | TASK [ceph-validate : Include check_system.yml] ******************************** 2026-04-11 05:13:08.247339 | orchestrator | Saturday 11 April 2026 05:13:00 +0000 (0:00:03.127) 0:02:56.749 ******** 2026-04-11 05:13:08.247351 | 
orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-11 05:13:08.247363 | orchestrator | 2026-04-11 05:13:08.247373 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] *************** 2026-04-11 05:13:08.247384 | orchestrator | Saturday 11 April 2026 05:13:03 +0000 (0:00:02.902) 0:02:59.652 ******** 2026-04-11 05:13:08.247395 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:08.247406 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:08.247417 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:08.247427 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:08.247438 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:08.247449 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:08.247459 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:08.247470 | orchestrator | 2026-04-11 05:13:08.247481 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-04-11 05:13:08.247492 | orchestrator | Saturday 11 April 2026 05:13:05 +0000 (0:00:01.924) 0:03:01.576 ******** 2026-04-11 05:13:08.247503 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:08.247513 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:08.247524 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:08.247535 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:08.247545 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:08.247608 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:08.247627 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:08.247638 | orchestrator | 2026-04-11 05:13:08.247648 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-04-11 05:13:08.247659 | orchestrator | Saturday 11 April 2026 
05:13:07 +0000 (0:00:02.118) 0:03:03.695 ******** 2026-04-11 05:13:08.247670 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:08.247681 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:08.247692 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:08.247702 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:08.247722 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.039457 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.039573 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.039590 | orchestrator | 2026-04-11 05:13:46.039602 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-04-11 05:13:46.039613 | orchestrator | Saturday 11 April 2026 05:13:10 +0000 (0:00:02.525) 0:03:06.221 ******** 2026-04-11 05:13:46.039623 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.039633 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.039657 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.039667 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.039677 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.039687 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.039697 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.039706 | orchestrator | 2026-04-11 05:13:46.039716 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] ********************** 2026-04-11 05:13:46.039727 | orchestrator | Saturday 11 April 2026 05:13:12 +0000 (0:00:02.208) 0:03:08.429 ******** 2026-04-11 05:13:46.039736 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.039746 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.039756 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.039766 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.039776 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.039786 
| orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.039795 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.039805 | orchestrator | 2026-04-11 05:13:46.039815 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-04-11 05:13:46.039825 | orchestrator | Saturday 11 April 2026 05:13:14 +0000 (0:00:02.137) 0:03:10.567 ******** 2026-04-11 05:13:46.039835 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.039845 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.039854 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.039864 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.039874 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.039883 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.039893 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.039903 | orchestrator | 2026-04-11 05:13:46.039913 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-04-11 05:13:46.039923 | orchestrator | Saturday 11 April 2026 05:13:16 +0000 (0:00:02.226) 0:03:12.794 ******** 2026-04-11 05:13:46.039933 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.039942 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.039952 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.039961 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.039971 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.039981 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.039990 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.040000 | orchestrator | 2026-04-11 05:13:46.040012 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-04-11 05:13:46.040023 | orchestrator | Saturday 11 April 2026 05:13:18 +0000 (0:00:02.130) 0:03:14.924 ******** 2026-04-11 
05:13:46.040035 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.040047 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.040059 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.040119 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.040134 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.040146 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.040155 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.040165 | orchestrator | 2026-04-11 05:13:46.040175 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-04-11 05:13:46.040185 | orchestrator | Saturday 11 April 2026 05:13:21 +0000 (0:00:02.311) 0:03:17.236 ******** 2026-04-11 05:13:46.040194 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.040204 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.040214 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.040223 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.040233 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.040243 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.040252 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.040262 | orchestrator | 2026-04-11 05:13:46.040271 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-04-11 05:13:46.040281 | orchestrator | Saturday 11 April 2026 05:13:23 +0000 (0:00:02.272) 0:03:19.508 ******** 2026-04-11 05:13:46.040291 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.040301 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.040310 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.040320 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.040329 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.040339 | orchestrator | skipping: [testbed-node-5] 2026-04-11 
05:13:46.040348 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.040358 | orchestrator | 2026-04-11 05:13:46.040368 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-04-11 05:13:46.040378 | orchestrator | Saturday 11 April 2026 05:13:25 +0000 (0:00:02.167) 0:03:21.676 ******** 2026-04-11 05:13:46.040387 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.040397 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.040406 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.040416 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.040425 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.040435 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.040444 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.040454 | orchestrator | 2026-04-11 05:13:46.040463 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-04-11 05:13:46.040473 | orchestrator | Saturday 11 April 2026 05:13:27 +0000 (0:00:02.197) 0:03:23.873 ******** 2026-04-11 05:13:46.040483 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.040492 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.040502 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.040511 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.040521 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.040530 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.040540 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.040550 | orchestrator | 2026-04-11 05:13:46.040559 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-04-11 05:13:46.040585 | orchestrator | Saturday 11 April 2026 05:13:29 +0000 (0:00:02.181) 0:03:26.055 ******** 2026-04-11 05:13:46.040596 | orchestrator | skipping: [testbed-node-0] 2026-04-11 
05:13:46.040605 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.040615 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.040625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 05:13:46.040642 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 05:13:46.040652 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.040669 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})  2026-04-11 05:13:46.040679 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})  2026-04-11 05:13:46.040689 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.040699 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})  2026-04-11 05:13:46.040709 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})  2026-04-11 05:13:46.040718 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.040728 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.040737 | orchestrator | 2026-04-11 05:13:46.040747 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-04-11 05:13:46.040757 | orchestrator | Saturday 11 April 2026 05:13:32 +0000 (0:00:02.343) 0:03:28.399 ******** 2026-04-11 05:13:46.040766 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.040776 | orchestrator | 
skipping: [testbed-node-1] 2026-04-11 05:13:46.040785 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.040795 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.040804 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.040813 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.040823 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.040832 | orchestrator | 2026-04-11 05:13:46.040842 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-04-11 05:13:46.040851 | orchestrator | Saturday 11 April 2026 05:13:34 +0000 (0:00:02.126) 0:03:30.525 ******** 2026-04-11 05:13:46.040861 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.040870 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.040880 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.040889 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.040898 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.040908 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.040917 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.040927 | orchestrator | 2026-04-11 05:13:46.040937 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-04-11 05:13:46.040946 | orchestrator | Saturday 11 April 2026 05:13:36 +0000 (0:00:02.292) 0:03:32.818 ******** 2026-04-11 05:13:46.040977 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.040987 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.040997 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.041007 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.041016 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.041025 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.041035 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.041044 | orchestrator | 
2026-04-11 05:13:46.041054 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-04-11 05:13:46.041064 | orchestrator | Saturday 11 April 2026 05:13:38 +0000 (0:00:02.200) 0:03:35.019 ******** 2026-04-11 05:13:46.041073 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.041083 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.041114 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.041125 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.041135 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.041144 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.041154 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.041163 | orchestrator | 2026-04-11 05:13:46.041173 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-04-11 05:13:46.041182 | orchestrator | Saturday 11 April 2026 05:13:41 +0000 (0:00:02.462) 0:03:37.481 ******** 2026-04-11 05:13:46.041199 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.041208 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.041218 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:13:46.041227 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.041237 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.041246 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.041255 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.041265 | orchestrator | 2026-04-11 05:13:46.041274 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-04-11 05:13:46.041284 | orchestrator | Saturday 11 April 2026 05:13:43 +0000 (0:00:02.253) 0:03:39.735 ******** 2026-04-11 05:13:46.041294 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.041303 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:13:46.041312 | orchestrator | 
skipping: [testbed-node-2] 2026-04-11 05:13:46.041322 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:13:46.041331 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:13:46.041341 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:13:46.041350 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:13:46.041360 | orchestrator | 2026-04-11 05:13:46.041369 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-04-11 05:13:46.041379 | orchestrator | Saturday 11 April 2026 05:13:45 +0000 (0:00:02.257) 0:03:41.992 ******** 2026-04-11 05:13:46.041389 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:13:46.041405 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:14:00.715212 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:14:00.715359 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:14:00.715375 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 05:14:00.715384 | orchestrator | 2026-04-11 05:14:00.715393 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-04-11 05:14:00.715422 | orchestrator | Saturday 11 April 2026 05:13:48 +0000 (0:00:02.440) 0:03:44.433 ******** 2026-04-11 05:14:00.715431 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:14:00.715441 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:14:00.715449 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:14:00.715457 | orchestrator | 2026-04-11 05:14:00.715464 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-04-11 05:14:00.715473 | orchestrator | Saturday 11 April 2026 05:13:49 +0000 (0:00:01.456) 0:03:45.890 ******** 2026-04-11 05:14:00.715483 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 
'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 05:14:00.715493 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 05:14:00.715501 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:00.715508 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})  2026-04-11 05:14:00.715516 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})  2026-04-11 05:14:00.715524 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:00.715532 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})  2026-04-11 05:14:00.715540 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})  2026-04-11 05:14:00.715548 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:00.715556 | orchestrator | 2026-04-11 05:14:00.715564 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-04-11 05:14:00.715571 | orchestrator | Saturday 11 April 2026 05:13:51 +0000 (0:00:01.410) 0:03:47.300 ******** 2026-04-11 05:14:00.715606 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:00.715616 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:00.715624 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:00.715632 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:00.715640 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:00.715648 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:00.715655 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:00.715663 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:00.715670 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:00.715677 | orchestrator | 2026-04-11 
05:14:00.715686 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-04-11 05:14:00.715716 | orchestrator | Saturday 11 April 2026 05:13:52 +0000 (0:00:01.410) 0:03:48.711 ******** 2026-04-11 05:14:00.715724 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:00.715732 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:00.715741 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:00.715749 | orchestrator | 2026-04-11 05:14:00.715756 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-04-11 05:14:00.715771 | orchestrator | Saturday 11 April 2026 05:13:54 +0000 (0:00:01.509) 0:03:50.220 ******** 2026-04-11 05:14:00.715779 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:00.715787 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:00.715796 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:00.715804 | orchestrator | 2026-04-11 05:14:00.715812 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-04-11 05:14:00.715820 | orchestrator | Saturday 11 April 2026 05:13:55 +0000 (0:00:01.440) 0:03:51.661 ******** 2026-04-11 05:14:00.715828 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:00.715836 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:00.715844 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:00.715852 | orchestrator | 2026-04-11 05:14:00.715860 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-04-11 05:14:00.715868 | orchestrator | Saturday 11 April 2026 05:13:56 +0000 (0:00:01.346) 0:03:53.008 ******** 2026-04-11 05:14:00.715876 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:00.715890 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:00.715897 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:00.715904 | orchestrator | 2026-04-11 
05:14:00.715911 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-04-11 05:14:00.715919 | orchestrator | Saturday 11 April 2026 05:13:58 +0000 (0:00:01.360) 0:03:54.369 ******** 2026-04-11 05:14:00.715926 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'}) 2026-04-11 05:14:00.715935 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'}) 2026-04-11 05:14:00.715942 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'}) 2026-04-11 05:14:00.715949 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'}) 2026-04-11 05:14:00.715957 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'}) 2026-04-11 05:14:00.715964 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'}) 2026-04-11 05:14:00.715971 | orchestrator | 2026-04-11 05:14:00.715978 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-04-11 05:14:00.715987 | orchestrator | Saturday 11 April 2026 05:14:00 +0000 (0:00:02.440) 0:03:56.809 ******** 2026-04-11 05:14:00.716000 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-c5955808-db0e-564c-b1b7-e2d336084003/osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1775876380.4174974, 'mtime': 1775876380.4124973, 'ctime': 1775876380.4124973, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-c5955808-db0e-564c-b1b7-e2d336084003/osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:00.716028 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200/osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1775876401.025813, 'mtime': 1775876401.021813, 'ctime': 1775876401.021813, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200/osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:03.817407 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:03.817525 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-4afe3055-abd0-5615-b44c-a776d8127855/osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1775876380.7105572, 'mtime': 1775876380.705557, 'ctime': 1775876380.705557, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-4afe3055-abd0-5615-b44c-a776d8127855/osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:03.817542 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2/osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1775876401.8268976, 'mtime': 1775876401.8223195, 'ctime': 1775876401.8223195, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2/osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:03.817554 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:03.817583 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412/osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1775876379.7772822, 'mtime': 1775876379.7722824, 'ctime': 1775876379.7722824, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 
'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412/osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:03.817639 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-a718c651-a264-5d59-a3a1-3dddb23bb056/osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1775876398.5156045, 'mtime': 1775876398.5116045, 'ctime': 1775876398.5116045, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-a718c651-a264-5d59-a3a1-3dddb23bb056/osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'}, 'ansible_loop_var': 'item'})  2026-04-11 
05:14:03.817651 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:03.817660 | orchestrator | 2026-04-11 05:14:03.817670 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-04-11 05:14:03.817680 | orchestrator | Saturday 11 April 2026 05:14:02 +0000 (0:00:01.522) 0:03:58.332 ******** 2026-04-11 05:14:03.817690 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 05:14:03.817701 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 05:14:03.817710 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:03.817718 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})  2026-04-11 05:14:03.817727 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})  2026-04-11 05:14:03.817736 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:03.817745 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})  2026-04-11 05:14:03.817753 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})  2026-04-11 05:14:03.817762 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:03.817771 | orchestrator | 2026-04-11 05:14:03.817780 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-04-11 05:14:03.817791 | orchestrator | Saturday 11 April 
2026 05:14:03 +0000 (0:00:01.452) 0:03:59.785 ******** 2026-04-11 05:14:03.817801 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:03.817812 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:03.817828 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:03.817842 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:03.817852 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:03.817866 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:14.498428 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'}, 'ansible_loop_var': 'item'})  2026-04-11 
05:14:14.498555 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:14.498572 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:14.498586 | orchestrator | 2026-04-11 05:14:14.498599 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-04-11 05:14:14.498611 | orchestrator | Saturday 11 April 2026 05:14:04 +0000 (0:00:01.421) 0:04:01.206 ******** 2026-04-11 05:14:14.498623 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'})  2026-04-11 05:14:14.498636 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'})  2026-04-11 05:14:14.498647 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:14.498659 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'})  2026-04-11 05:14:14.498670 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'})  2026-04-11 05:14:14.498681 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:14.498692 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'})  2026-04-11 05:14:14.498703 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 
'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'})  2026-04-11 05:14:14.498715 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:14.498726 | orchestrator | 2026-04-11 05:14:14.498738 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-04-11 05:14:14.498750 | orchestrator | Saturday 11 April 2026 05:14:06 +0000 (0:00:01.713) 0:04:02.920 ******** 2026-04-11 05:14:14.498762 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-c5955808-db0e-564c-b1b7-e2d336084003', 'data_vg': 'ceph-c5955808-db0e-564c-b1b7-e2d336084003'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:14.498794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-6808ea3d-3e7e-5ef0-9dd2-f9487250f200', 'data_vg': 'ceph-6808ea3d-3e7e-5ef0-9dd2-f9487250f200'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:14.498806 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:14.498818 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-4afe3055-abd0-5615-b44c-a776d8127855', 'data_vg': 'ceph-4afe3055-abd0-5615-b44c-a776d8127855'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:14.498829 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-1c2bdb62-89ba-5856-b2e0-5db351397ca2', 'data_vg': 'ceph-1c2bdb62-89ba-5856-b2e0-5db351397ca2'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:14.498840 | orchestrator | skipping: 
[testbed-node-4] 2026-04-11 05:14:14.498866 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-e8a3f20d-ed3f-5f34-b319-d0862efd8412', 'data_vg': 'ceph-e8a3f20d-ed3f-5f34-b319-d0862efd8412'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:14.498895 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-a718c651-a264-5d59-a3a1-3dddb23bb056', 'data_vg': 'ceph-a718c651-a264-5d59-a3a1-3dddb23bb056'}, 'ansible_loop_var': 'item'})  2026-04-11 05:14:14.498907 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:14.498918 | orchestrator | 2026-04-11 05:14:14.498930 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-04-11 05:14:14.498942 | orchestrator | Saturday 11 April 2026 05:14:08 +0000 (0:00:01.427) 0:04:04.348 ******** 2026-04-11 05:14:14.498955 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:14:14.498968 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:14:14.498981 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:14:14.498993 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:14.499006 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:14.499019 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:14.499031 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:14:14.499043 | orchestrator | 2026-04-11 05:14:14.499056 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-04-11 05:14:14.499069 | orchestrator | Saturday 11 April 2026 05:14:10 +0000 (0:00:01.957) 0:04:06.305 ******** 2026-04-11 05:14:14.499082 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:14:14.499147 | orchestrator | 
skipping: [testbed-node-1] 2026-04-11 05:14:14.499166 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:14:14.499183 | orchestrator | skipping: [testbed-manager] 2026-04-11 05:14:14.499202 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 05:14:14.499221 | orchestrator | 2026-04-11 05:14:14.499236 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-04-11 05:14:14.499247 | orchestrator | Saturday 11 April 2026 05:14:12 +0000 (0:00:02.770) 0:04:09.076 ******** 2026-04-11 05:14:14.499258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499324 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:14.499335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499389 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:14.499400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499468 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:14.499486 | orchestrator | 2026-04-11 05:14:14.499504 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-04-11 05:14:14.499516 | orchestrator | Saturday 11 April 2026 05:14:14 +0000 (0:00:01.515) 0:04:10.592 ******** 2026-04-11 05:14:14.499534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': 
{'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:14.499587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474557 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:32.474696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474787 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:32.474799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474855 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:32.474866 | orchestrator | 2026-04-11 05:14:32.474878 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-04-11 05:14:32.474890 | orchestrator | Saturday 11 April 2026 05:14:16 +0000 (0:00:01.757) 0:04:12.349 ******** 2026-04-11 05:14:32.474901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 
05:14:32.474980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.474999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.475017 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.475036 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:14:32.475055 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:14:32.475066 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.475078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.475129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.475142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.475154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 05:14:32.475167 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:14:32.475189 | orchestrator | 2026-04-11 05:14:32.475202 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-04-11 05:14:32.475215 | orchestrator | Saturday 11 April 2026 05:14:17 +0000 (0:00:01.463) 0:04:13.813 ******** 2026-04-11 05:14:32.475228 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:14:32.475241 | orchestrator | skipping: [testbed-node-1] 2026-04-11 
05:14:32.475272 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:14:32.475286 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:14:32.475298 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:14:32.475311 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:14:32.475323 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:14:32.475335 | orchestrator |
2026-04-11 05:14:32.475349 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] *****************************
2026-04-11 05:14:32.475362 | orchestrator | Saturday 11 April 2026 05:14:19 +0000 (0:00:01.962) 0:04:15.775 ********
2026-04-11 05:14:32.475374 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:14:32.475387 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:14:32.475399 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:14:32.475412 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:14:32.475424 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:14:32.475437 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:14:32.475448 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:14:32.475459 | orchestrator |
2026-04-11 05:14:32.475470 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ******************
2026-04-11 05:14:32.475481 | orchestrator | Saturday 11 April 2026 05:14:21 +0000 (0:00:02.178) 0:04:17.954 ********
2026-04-11 05:14:32.475492 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:14:32.475503 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:14:32.475513 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:14:32.475524 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:14:32.475535 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:14:32.475546 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:14:32.475556 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:14:32.475567 | orchestrator |
2026-04-11 05:14:32.475578 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] ***
2026-04-11 05:14:32.475590 | orchestrator | Saturday 11 April 2026 05:14:23 +0000 (0:00:02.164) 0:04:20.118 ********
2026-04-11 05:14:32.475601 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:14:32.475611 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:14:32.475622 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:14:32.475633 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:14:32.475644 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:14:32.475654 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:14:32.475665 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:14:32.475676 | orchestrator |
2026-04-11 05:14:32.475687 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] ***
2026-04-11 05:14:32.475698 | orchestrator | Saturday 11 April 2026 05:14:25 +0000 (0:00:02.087) 0:04:22.206 ********
2026-04-11 05:14:32.475709 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:14:32.475720 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:14:32.475731 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:14:32.475742 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:14:32.475752 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:14:32.475763 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:14:32.475774 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:14:32.475785 | orchestrator |
2026-04-11 05:14:32.475796 | orchestrator | TASK [ceph-validate : Validate container registry credentials] *****************
2026-04-11 05:14:32.475807 | orchestrator | Saturday 11 April 2026 05:14:28 +0000 (0:00:02.127) 0:04:24.334 ********
2026-04-11 05:14:32.475818 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:14:32.475828 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:14:32.475839 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:14:32.475857 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:14:32.475868 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:14:32.475879 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:14:32.475890 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:14:32.475900 | orchestrator |
2026-04-11 05:14:32.475911 | orchestrator | TASK [ceph-validate : Validate container service and container package] ********
2026-04-11 05:14:32.475923 | orchestrator | Saturday 11 April 2026 05:14:29 +0000 (0:00:01.839) 0:04:26.173 ********
2026-04-11 05:14:32.475933 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:14:32.475944 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:14:32.475955 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:14:32.475966 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:14:32.475976 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:14:32.475987 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:14:32.475998 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:14:32.476009 | orchestrator |
2026-04-11 05:14:32.476020 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] **********************
2026-04-11 05:14:32.476031 | orchestrator | Saturday 11 April 2026 05:14:32 +0000 (0:00:02.282) 0:04:28.456 ********
2026-04-11 05:14:32.476042 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:32.476055 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:32.476068 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:32.476085 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:32.476115 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:32.476129 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:32.476140 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:14:32.476157 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:36.714901 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:36.715015 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:36.715041 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:36.715060 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:36.715082 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:36.715186 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:14:36.715208 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:36.715262 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:36.715284 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:36.715303 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:36.715323 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:36.715337 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:36.715348 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:14:36.715359 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:36.715370 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:36.715381 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:36.715391 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:36.715402 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:36.715413 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:36.715440 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:36.715454 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:36.715468 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:36.715481 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:36.715514 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:36.715526 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:36.715537 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:36.715548 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:14:36.715568 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:36.715579 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:36.715590 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:14:36.715600 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:36.715611 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:36.715622 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:36.715632 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:36.715643 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:36.715654 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:36.715665 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:36.715676 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:14:36.715686 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:36.715697 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:36.715708 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:14:36.715719 | orchestrator |
2026-04-11 05:14:36.715731 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-04-11 05:14:36.715742 | orchestrator | Saturday 11 April 2026 05:14:34 +0000 (0:00:02.255) 0:04:30.712 ********
2026-04-11 05:14:36.715753 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:14:36.715764 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:14:36.715775 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:14:36.715786 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:14:36.715796 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:14:36.715807 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:14:36.715818 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:14:36.715828 | orchestrator |
2026-04-11 05:14:36.715839 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-04-11 05:14:36.715850 | orchestrator | Saturday 11 April 2026 05:14:36 +0000 (0:00:02.099) 0:04:32.811 ********
2026-04-11 05:14:36.715866 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:36.715878 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:36.715889 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:36.715908 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:36.715925 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:43.278291 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:43.278452 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:14:43.278475 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:43.278487 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:43.278498 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:43.278510 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:43.278521 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:43.278533 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:43.278543 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:14:43.278553 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:43.278563 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:43.278572 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:43.278582 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:43.278592 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:43.278602 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:43.278612 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:14:43.278622 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:43.278632 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:43.278642 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:43.278678 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:43.278688 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:43.278698 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:43.278708 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:43.278717 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:43.278746 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:43.278759 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:43.278770 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:43.278781 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:43.278792 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:14:43.278803 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:43.278814 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:43.278825 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:14:43.278837 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-11 05:14:43.278849 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:43.278860 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-11 05:14:43.278871 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-11 05:14:43.278883 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:43.278973 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:43.278993 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:43.279005 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:14:43.279023 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-11 05:14:43.279033 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-11 05:14:43.279042 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-11 05:14:43.279052 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:14:43.279062 | orchestrator |
2026-04-11 05:14:43.279072 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-04-11 05:14:43.279083 | orchestrator | Saturday 11 April 2026 05:14:38 +0000 (0:00:02.247) 0:04:35.059 ********
2026-04-11 05:14:43.279120 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:14:43.279141 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:14:43.279151 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:14:43.279161 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:14:43.279171 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:14:43.279180 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:14:43.279189 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:14:43.279199 | orchestrator |
2026-04-11 05:14:43.279209 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-04-11 05:14:43.279218 | orchestrator | Saturday 11 April 2026 05:14:40 +0000 (0:00:02.110) 0:04:37.169 ********
2026-04-11 05:14:43.279228 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:14:43.279237 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:14:43.279247 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:14:43.279256 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:14:43.279266 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:14:43.279275 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:14:43.279284 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:14:43.279294 | orchestrator |
2026-04-11 05:14:43.279303 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-04-11 05:14:43.279313 | orchestrator | Saturday 11 April 2026 05:14:43 +0000 (0:00:02.199) 0:04:39.369 ********
2026-04-11 05:14:43.279332 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:15:24.985680 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:15:24.985787 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:15:24.985800 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:15:24.985806 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:15:24.985813 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:15:24.985819 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:15:24.985826 | orchestrator |
2026-04-11 05:15:24.985833 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-04-11 05:15:24.985840 | orchestrator | Saturday 11 April 2026 05:14:45 +0000 (0:00:02.349) 0:04:41.718 ********
2026-04-11 05:15:24.985849 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-04-11 05:15:24.985857 | orchestrator |
2026-04-11 05:15:24.985864 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-04-11 05:15:24.985871 | orchestrator | Saturday 11 April 2026 05:14:48 +0000 (0:00:02.742) 0:04:44.461 ********
2026-04-11 05:15:24.985878 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-11 05:15:24.985885 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-11 05:15:24.985892 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-11 05:15:24.985898 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-11 05:15:24.985925 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-11 05:15:24.985932 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-11 05:15:24.985938 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-11 05:15:24.985944 | orchestrator |
2026-04-11 05:15:24.985950 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-04-11 05:15:24.985957 | orchestrator | Saturday 11 April 2026 05:14:50 +0000 (0:00:02.129) 0:04:46.591 ********
2026-04-11 05:15:24.985963 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:15:24.985970 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:15:24.985976 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:15:24.985982 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:15:24.985988 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:15:24.985995 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:15:24.986001 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:15:24.986007 | orchestrator |
2026-04-11 05:15:24.986013 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-04-11 05:15:24.986064 | orchestrator | Saturday 11 April 2026 05:14:52 +0000 (0:00:02.154) 0:04:48.745 ********
2026-04-11 05:15:24.986127 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:15:24.986135 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:15:24.986141 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:15:24.986147 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:15:24.986153 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:15:24.986160 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:15:24.986166 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:15:24.986173 | orchestrator |
2026-04-11 05:15:24.986179 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-04-11 05:15:24.986186 | orchestrator | Saturday 11 April 2026 05:14:54 +0000 (0:00:02.022) 0:04:50.768 ********
2026-04-11 05:15:24.986192 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:24.986197 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:15:24.986202 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:15:24.986208 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:15:24.986214 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:15:24.986221 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:15:24.986228 | orchestrator | ok: [testbed-manager]
2026-04-11 05:15:24.986234 | orchestrator |
2026-04-11 05:15:24.986241 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-04-11 05:15:24.986247 | orchestrator | Saturday 11 April 2026 05:14:57 +0000 (0:00:02.622) 0:04:53.391 ********
2026-04-11 05:15:24.986253 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:15:24.986259 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:15:24.986266 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:15:24.986272 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:15:24.986279 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:15:24.986285 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:15:24.986292 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:15:24.986298 | orchestrator |
2026-04-11 05:15:24.986305 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-04-11 05:15:24.986324 | orchestrator | Saturday 11 April 2026 05:14:59 +0000 (0:00:02.438) 0:04:55.722 ********
2026-04-11 05:15:24.986331 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:15:24.986338 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:15:24.986345 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:15:24.986351 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:15:24.986357 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:15:24.986364 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:15:24.986370 | orchestrator | skipping: [testbed-manager]
2026-04-11 05:15:24.986377 | orchestrator |
2026-04-11 05:15:24.986383 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-04-11 05:15:24.986397 | orchestrator | Saturday 11 April 2026 05:15:01 +0000 (0:00:02.695) 0:04:58.160 ********
2026-04-11 05:15:24.986404 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:24.986410 | orchestrator |
2026-04-11 05:15:24.986417 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-04-11 05:15:24.986424 | orchestrator | Saturday 11 April 2026 05:15:04 +0000 (0:00:02.695) 0:05:00.856 ********
2026-04-11 05:15:24.986430 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:15:24.986436 | orchestrator |
2026-04-11 05:15:24.986443 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-04-11 05:15:24.986450 | orchestrator |
2026-04-11 05:15:24.986470 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-11 05:15:24.986477 | orchestrator | Saturday 11 April 2026 05:15:06 +0000 (0:00:01.469) 0:05:02.325 ********
2026-04-11 05:15:24.986483 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:24.986489 | orchestrator |
2026-04-11 05:15:24.986496 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-11 05:15:24.986502 | orchestrator | Saturday 11 April 2026 05:15:07 +0000 (0:00:01.439) 0:05:03.765 ********
2026-04-11 05:15:24.986508 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:24.986514 | orchestrator |
2026-04-11 05:15:24.986521 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-04-11 05:15:24.986527 | orchestrator | Saturday 11 April 2026 05:15:08 +0000 (0:00:01.172) 0:05:04.938 ********
2026-04-11 05:15:24.986535 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-11 05:15:24.986544 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-11 05:15:24.986551 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-11 05:15:24.986558 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-11 05:15:24.986566 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-11 05:15:24.986574 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}])
2026-04-11 05:15:24.986582 | orchestrator |
2026-04-11 05:15:24.986589 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-11 05:15:24.986601 | orchestrator |
2026-04-11 05:15:24.986608 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-11 05:15:24.986614 | orchestrator | Saturday 11 April 2026 05:15:18 +0000 (0:00:09.744) 0:05:14.682 ********
2026-04-11 05:15:24.986620 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:24.986626 | orchestrator |
2026-04-11 05:15:24.986633 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-11 05:15:24.986643 | orchestrator | Saturday 11 April 2026 05:15:19 +0000 (0:00:01.458) 0:05:16.141 ********
2026-04-11 05:15:24.986649 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:24.986656 | orchestrator |
2026-04-11 05:15:24.986662 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-11 05:15:24.986668 | orchestrator | Saturday 11 April 2026 05:15:21 +0000 (0:00:01.194) 0:05:17.336 ********
2026-04-11 05:15:24.986674 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:15:24.986680 | orchestrator |
2026-04-11 05:15:24.986686 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-11 05:15:24.986692 | orchestrator | Saturday 11 April 2026 05:15:22 +0000 (0:00:01.144) 0:05:18.480 ********
2026-04-11 05:15:24.986698 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:24.986704 | orchestrator |
2026-04-11 05:15:24.986711 | orchestrator | TASK [ceph-facts : Include facts.yml]
******************************************
2026-04-11 05:15:24.986717 | orchestrator | Saturday 11 April 2026 05:15:23 +0000 (0:00:01.186) 0:05:19.667 ********
2026-04-11 05:15:24.986723 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-11 05:15:24.986729 | orchestrator |
2026-04-11 05:15:24.986736 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-11 05:15:24.986742 | orchestrator | Saturday 11 April 2026 05:15:24 +0000 (0:00:01.158) 0:05:20.826 ********
2026-04-11 05:15:24.986754 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:49.630827 | orchestrator |
2026-04-11 05:15:49.630946 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-11 05:15:49.630965 | orchestrator | Saturday 11 April 2026 05:15:26 +0000 (0:00:01.492) 0:05:22.318 ********
2026-04-11 05:15:49.630978 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:49.630990 | orchestrator |
2026-04-11 05:15:49.631002 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-11 05:15:49.631013 | orchestrator | Saturday 11 April 2026 05:15:27 +0000 (0:00:01.216) 0:05:23.535 ********
2026-04-11 05:15:49.631024 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:49.631035 | orchestrator |
2026-04-11 05:15:49.631046 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-11 05:15:49.631058 | orchestrator | Saturday 11 April 2026 05:15:28 +0000 (0:00:01.468) 0:05:25.003 ********
2026-04-11 05:15:49.631120 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:49.631137 | orchestrator |
2026-04-11 05:15:49.631149 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-11 05:15:49.631160 | orchestrator | Saturday 11 April 2026 05:15:29 +0000 (0:00:01.185) 0:05:26.189 ********
2026-04-11 05:15:49.631171 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:49.631182 | orchestrator |
2026-04-11 05:15:49.631193 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-11 05:15:49.631205 | orchestrator | Saturday 11 April 2026 05:15:31 +0000 (0:00:01.159) 0:05:27.348 ********
2026-04-11 05:15:49.631216 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:49.631227 | orchestrator |
2026-04-11 05:15:49.631238 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-11 05:15:49.631250 | orchestrator | Saturday 11 April 2026 05:15:32 +0000 (0:00:01.182) 0:05:28.531 ********
2026-04-11 05:15:49.631262 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:15:49.631274 | orchestrator |
2026-04-11 05:15:49.631285 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-11 05:15:49.631296 | orchestrator | Saturday 11 April 2026 05:15:33 +0000 (0:00:01.147) 0:05:29.678 ********
2026-04-11 05:15:49.631332 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:49.631344 | orchestrator |
2026-04-11 05:15:49.631355 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-11 05:15:49.631367 | orchestrator | Saturday 11 April 2026 05:15:34 +0000 (0:00:01.125) 0:05:30.804 ********
2026-04-11 05:15:49.631381 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 05:15:49.631394 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:15:49.631406 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:15:49.631418 | orchestrator |
2026-04-11 05:15:49.631430 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-11 05:15:49.631443 | orchestrator | Saturday 11 April 2026 05:15:36 +0000 (0:00:01.631) 0:05:32.436 ********
2026-04-11 05:15:49.631455 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:49.631467 | orchestrator |
2026-04-11 05:15:49.631481 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-11 05:15:49.631501 | orchestrator | Saturday 11 April 2026 05:15:37 +0000 (0:00:01.251) 0:05:33.687 ********
2026-04-11 05:15:49.631519 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 05:15:49.631538 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:15:49.631556 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:15:49.631575 | orchestrator |
2026-04-11 05:15:49.631593 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-11 05:15:49.631610 | orchestrator | Saturday 11 April 2026 05:15:40 +0000 (0:00:03.227) 0:05:36.915 ********
2026-04-11 05:15:49.631628 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 05:15:49.631649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 05:15:49.631668 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 05:15:49.631686 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:15:49.631705 | orchestrator |
2026-04-11 05:15:49.631725 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-11 05:15:49.631745 | orchestrator | Saturday 11 April 2026 05:15:42 +0000 (0:00:01.426) 0:05:38.342 ********
2026-04-11 05:15:49.631764 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 05:15:49.631801 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 05:15:49.631814 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 05:15:49.631825 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:15:49.631836 | orchestrator |
2026-04-11 05:15:49.631847 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-11 05:15:49.631858 | orchestrator | Saturday 11 April 2026 05:15:44 +0000 (0:00:01.991) 0:05:40.333 ********
2026-04-11 05:15:49.631892 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:15:49.631908 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:15:49.631931 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:15:49.631942 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:15:49.631953 | orchestrator |
2026-04-11 05:15:49.631964 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-11 05:15:49.631975 | orchestrator | Saturday 11 April 2026 05:15:45 +0000 (0:00:01.196) 0:05:41.530 ********
2026-04-11 05:15:49.631989 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1b0d6fe4ad27', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 05:15:37.995979', 'end': '2026-04-11 05:15:38.056646', 'delta': '0:00:00.060667', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1b0d6fe4ad27'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 05:15:49.632004 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1a56ecc96cb4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 05:15:38.609461', 'end': '2026-04-11 05:15:38.669438', 'delta': '0:00:00.059977', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1a56ecc96cb4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 05:15:49.632022 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f023dde40a6c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 05:15:39.527162', 'end': '2026-04-11 05:15:39.573472', 'delta': '0:00:00.046310', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f023dde40a6c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 05:15:49.632034 | orchestrator |
2026-04-11 05:15:49.632045 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-11 05:15:49.632056 | orchestrator | Saturday 11 April 2026 05:15:46 +0000 (0:00:01.231) 0:05:42.761 ********
2026-04-11 05:15:49.632095 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:15:49.632107 | orchestrator |
2026-04-11 05:15:49.632118 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-11 05:15:49.632129 | orchestrator | Saturday 11 April 2026 05:15:48 +0000 (0:00:01.661) 0:05:44.423 ********
2026-04-11 05:15:49.632140 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:15:49.632158 | orchestrator |
2026-04-11 05:15:49.632169 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-11 05:15:49.632180 | orchestrator | Saturday 11 April 2026 05:15:49 +0000 (0:00:01.258) 0:05:45.681 ********
2026-04-11 05:15:49.632191 | orchestrator | ok: [testbed-node-0]
2026-04-11
05:15:49.632202 | orchestrator |
2026-04-11 05:16:04.328247 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-11 05:16:04.328368 | orchestrator | Saturday 11 April 2026 05:15:50 +0000 (0:00:01.137) 0:05:46.819 ********
2026-04-11 05:16:04.328384 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-04-11 05:16:04.328397 | orchestrator |
2026-04-11 05:16:04.328409 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 05:16:04.328420 | orchestrator | Saturday 11 April 2026 05:15:52 +0000 (0:00:02.023) 0:05:48.843 ********
2026-04-11 05:16:04.328431 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:16:04.328443 | orchestrator |
2026-04-11 05:16:04.328454 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-11 05:16:04.328465 | orchestrator | Saturday 11 April 2026 05:15:53 +0000 (0:00:01.217) 0:05:50.060 ********
2026-04-11 05:16:04.328478 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:16:04.328489 | orchestrator |
2026-04-11 05:16:04.328500 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-11 05:16:04.328511 | orchestrator | Saturday 11 April 2026 05:15:54 +0000 (0:00:01.130) 0:05:51.191 ********
2026-04-11 05:16:04.328522 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:16:04.328533 | orchestrator |
2026-04-11 05:16:04.328544 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 05:16:04.328555 | orchestrator | Saturday 11 April 2026 05:15:56 +0000 (0:00:01.319) 0:05:52.511 ********
2026-04-11 05:16:04.328566 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:16:04.328577 | orchestrator |
2026-04-11 05:16:04.328588 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-11 05:16:04.328599 | orchestrator | Saturday 11 April 2026 05:15:57 +0000 (0:00:01.104) 0:05:53.615 ********
2026-04-11 05:16:04.328610 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:16:04.328620 | orchestrator |
2026-04-11 05:16:04.328638 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-11 05:16:04.328657 | orchestrator | Saturday 11 April 2026 05:15:58 +0000 (0:00:01.129) 0:05:54.744 ********
2026-04-11 05:16:04.328677 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:16:04.328697 | orchestrator |
2026-04-11 05:16:04.328716 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-11 05:16:04.328734 | orchestrator | Saturday 11 April 2026 05:15:59 +0000 (0:00:01.101) 0:05:55.846 ********
2026-04-11 05:16:04.328752 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:16:04.328771 | orchestrator |
2026-04-11 05:16:04.328790 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-11 05:16:04.328809 | orchestrator | Saturday 11 April 2026 05:16:00 +0000 (0:00:01.152) 0:05:56.998 ********
2026-04-11 05:16:04.328829 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:16:04.328849 | orchestrator |
2026-04-11 05:16:04.328871 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-11 05:16:04.328892 | orchestrator | Saturday 11 April 2026 05:16:01 +0000 (0:00:01.147) 0:05:58.146 ********
2026-04-11 05:16:04.328914 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:16:04.328934 | orchestrator |
2026-04-11 05:16:04.328953 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-11 05:16:04.328975 | orchestrator | Saturday 11 April 2026 05:16:03 +0000 (0:00:01.125) 0:05:59.272 ********
2026-04-11 05:16:04.328996 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:16:04.329016 | orchestrator |
2026-04-11 05:16:04.329030 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-11 05:16:04.329041 | orchestrator | Saturday 11 April 2026 05:16:04 +0000 (0:00:01.107) 0:06:00.379 ******** 2026-04-11 05:16:04.329083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:16:04.329141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:16:04.329155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:16:04.329190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 05:16:04.329214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:16:04.329234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:16:04.329252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:16:04.329284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 
'sas_device_handle': None, 'serial': '4dd7cb49', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-11 05:16:04.329320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:16:04.329353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:16:05.549469 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:16:05.549574 | orchestrator | 2026-04-11 05:16:05.549590 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 05:16:05.549603 | orchestrator | Saturday 11 April 2026 05:16:05 +0000 (0:00:01.244) 0:06:01.624 ******** 2026-04-11 05:16:05.549619 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:16:05.549635 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:16:05.549647 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:16:05.549684 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:16:05.549711 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:16:05.549724 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:16:05.549753 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:16:05.549775 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4dd7cb49', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:16:05.549798 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:16:05.549816 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:17:04.108495 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:04.108610 | orchestrator | 2026-04-11 05:17:04.108630 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-11 05:17:04.108648 | 
orchestrator | Saturday 11 April 2026 05:16:06 +0000 (0:00:01.226) 0:06:02.851 ******** 2026-04-11 05:17:04.108663 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:04.108679 | orchestrator | 2026-04-11 05:17:04.108694 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-11 05:17:04.108709 | orchestrator | Saturday 11 April 2026 05:16:08 +0000 (0:00:01.552) 0:06:04.404 ******** 2026-04-11 05:17:04.108724 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:04.108738 | orchestrator | 2026-04-11 05:17:04.108752 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:17:04.108766 | orchestrator | Saturday 11 April 2026 05:16:09 +0000 (0:00:01.116) 0:06:05.520 ******** 2026-04-11 05:17:04.108781 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:04.108795 | orchestrator | 2026-04-11 05:17:04.108811 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:17:04.108826 | orchestrator | Saturday 11 April 2026 05:16:10 +0000 (0:00:01.507) 0:06:07.028 ******** 2026-04-11 05:17:04.108835 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:04.108844 | orchestrator | 2026-04-11 05:17:04.108876 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:17:04.108885 | orchestrator | Saturday 11 April 2026 05:16:12 +0000 (0:00:01.218) 0:06:08.247 ******** 2026-04-11 05:17:04.108894 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:04.108903 | orchestrator | 2026-04-11 05:17:04.108911 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:17:04.108920 | orchestrator | Saturday 11 April 2026 05:16:13 +0000 (0:00:01.268) 0:06:09.515 ******** 2026-04-11 05:17:04.108929 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:04.108938 | orchestrator | 2026-04-11 05:17:04.108947 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 05:17:04.108955 | orchestrator | Saturday 11 April 2026 05:16:14 +0000 (0:00:01.329) 0:06:10.844 ******** 2026-04-11 05:17:04.108964 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:17:04.108973 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-11 05:17:04.108981 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-11 05:17:04.108990 | orchestrator | 2026-04-11 05:17:04.108999 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 05:17:04.109007 | orchestrator | Saturday 11 April 2026 05:16:16 +0000 (0:00:01.989) 0:06:12.833 ******** 2026-04-11 05:17:04.109016 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-11 05:17:04.109054 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-11 05:17:04.109064 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-11 05:17:04.109074 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:04.109084 | orchestrator | 2026-04-11 05:17:04.109094 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 05:17:04.109105 | orchestrator | Saturday 11 April 2026 05:16:17 +0000 (0:00:01.150) 0:06:13.984 ******** 2026-04-11 05:17:04.109115 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:04.109125 | orchestrator | 2026-04-11 05:17:04.109135 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 05:17:04.109146 | orchestrator | Saturday 11 April 2026 05:16:18 +0000 (0:00:01.149) 0:06:15.134 ******** 2026-04-11 05:17:04.109156 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:17:04.109166 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 
05:17:04.109177 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:17:04.109188 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:17:04.109198 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:17:04.109208 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:17:04.109231 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:17:04.109241 | orchestrator | 2026-04-11 05:17:04.109251 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 05:17:04.109261 | orchestrator | Saturday 11 April 2026 05:16:21 +0000 (0:00:02.084) 0:06:17.218 ******** 2026-04-11 05:17:04.109272 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:17:04.109282 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:17:04.109293 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:17:04.109303 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:17:04.109313 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:17:04.109322 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:17:04.109332 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:17:04.109349 | orchestrator | 2026-04-11 05:17:04.109360 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-04-11 05:17:04.109370 | orchestrator | Saturday 11 April 2026 05:16:23 +0000 (0:00:02.974) 0:06:20.193 
******** 2026-04-11 05:17:04.109381 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-04-11 05:17:04.109389 | orchestrator | 2026-04-11 05:17:04.109398 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-04-11 05:17:04.109407 | orchestrator | Saturday 11 April 2026 05:16:26 +0000 (0:00:02.219) 0:06:22.412 ******** 2026-04-11 05:17:04.109430 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:04.109439 | orchestrator | 2026-04-11 05:17:04.109448 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-04-11 05:17:04.109457 | orchestrator | Saturday 11 April 2026 05:16:27 +0000 (0:00:01.229) 0:06:23.642 ******** 2026-04-11 05:17:04.109465 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:04.109474 | orchestrator | 2026-04-11 05:17:04.109482 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-04-11 05:17:04.109491 | orchestrator | Saturday 11 April 2026 05:16:28 +0000 (0:00:01.155) 0:06:24.797 ******** 2026-04-11 05:17:04.109500 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-04-11 05:17:04.109509 | orchestrator | 2026-04-11 05:17:04.109517 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-04-11 05:17:04.109526 | orchestrator | Saturday 11 April 2026 05:16:30 +0000 (0:00:02.325) 0:06:27.123 ******** 2026-04-11 05:17:04.109534 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:04.109543 | orchestrator | 2026-04-11 05:17:04.109551 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-04-11 05:17:04.109560 | orchestrator | Saturday 11 April 2026 05:16:32 +0000 (0:00:01.175) 0:06:28.299 ******** 2026-04-11 05:17:04.109568 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:17:04.109577 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:17:04.109585 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:17:04.109594 | orchestrator | 2026-04-11 05:17:04.109603 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-04-11 05:17:04.109611 | orchestrator | Saturday 11 April 2026 05:16:34 +0000 (0:00:02.489) 0:06:30.788 ******** 2026-04-11 05:17:04.109620 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-04-11 05:17:04.109629 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-04-11 05:17:04.109639 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-04-11 05:17:04.109647 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-04-11 05:17:04.109656 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-04-11 05:17:04.109665 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-04-11 05:17:04.109673 | orchestrator | 2026-04-11 05:17:04.109682 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-04-11 05:17:04.109691 | orchestrator | Saturday 11 April 2026 05:16:47 +0000 (0:00:13.253) 0:06:44.042 ******** 2026-04-11 05:17:04.109699 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:17:04.109708 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:17:04.109716 | orchestrator | 2026-04-11 05:17:04.109725 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-04-11 05:17:04.109733 | orchestrator | Saturday 11 
April 2026 05:16:51 +0000 (0:00:03.951) 0:06:47.994 ******** 2026-04-11 05:17:04.109742 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:17:04.109751 | orchestrator | 2026-04-11 05:17:04.109759 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 05:17:04.109774 | orchestrator | Saturday 11 April 2026 05:16:54 +0000 (0:00:02.542) 0:06:50.537 ******** 2026-04-11 05:17:04.109782 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-04-11 05:17:04.109791 | orchestrator | 2026-04-11 05:17:04.109800 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 05:17:04.109809 | orchestrator | Saturday 11 April 2026 05:16:55 +0000 (0:00:01.534) 0:06:52.072 ******** 2026-04-11 05:17:04.109817 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-04-11 05:17:04.109826 | orchestrator | 2026-04-11 05:17:04.109839 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 05:17:04.109848 | orchestrator | Saturday 11 April 2026 05:16:57 +0000 (0:00:01.599) 0:06:53.671 ******** 2026-04-11 05:17:04.109856 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:04.109865 | orchestrator | 2026-04-11 05:17:04.109874 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 05:17:04.109882 | orchestrator | Saturday 11 April 2026 05:16:59 +0000 (0:00:01.580) 0:06:55.252 ******** 2026-04-11 05:17:04.109891 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:04.109899 | orchestrator | 2026-04-11 05:17:04.109908 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-11 05:17:04.109917 | orchestrator | Saturday 11 April 2026 05:17:00 +0000 (0:00:01.125) 0:06:56.377 ******** 2026-04-11 05:17:04.109925 | 
orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:04.109934 | orchestrator | 2026-04-11 05:17:04.109943 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 05:17:04.109951 | orchestrator | Saturday 11 April 2026 05:17:01 +0000 (0:00:01.132) 0:06:57.509 ******** 2026-04-11 05:17:04.109960 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:04.109969 | orchestrator | 2026-04-11 05:17:04.109977 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 05:17:04.109986 | orchestrator | Saturday 11 April 2026 05:17:02 +0000 (0:00:01.166) 0:06:58.675 ******** 2026-04-11 05:17:04.109995 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:04.110003 | orchestrator | 2026-04-11 05:17:04.110012 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 05:17:04.110128 | orchestrator | Saturday 11 April 2026 05:17:03 +0000 (0:00:01.501) 0:07:00.177 ******** 2026-04-11 05:17:04.110145 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:04.110161 | orchestrator | 2026-04-11 05:17:04.110185 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 05:17:54.077612 | orchestrator | Saturday 11 April 2026 05:17:05 +0000 (0:00:01.112) 0:07:01.290 ******** 2026-04-11 05:17:54.077729 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.077747 | orchestrator | 2026-04-11 05:17:54.077760 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 05:17:54.077772 | orchestrator | Saturday 11 April 2026 05:17:06 +0000 (0:00:01.169) 0:07:02.460 ******** 2026-04-11 05:17:54.077783 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:54.077794 | orchestrator | 2026-04-11 05:17:54.077805 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 
05:17:54.077816 | orchestrator | Saturday 11 April 2026 05:17:07 +0000 (0:00:01.492) 0:07:03.952 ******** 2026-04-11 05:17:54.077827 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:54.077837 | orchestrator | 2026-04-11 05:17:54.077848 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 05:17:54.077859 | orchestrator | Saturday 11 April 2026 05:17:09 +0000 (0:00:01.504) 0:07:05.457 ******** 2026-04-11 05:17:54.077870 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.077881 | orchestrator | 2026-04-11 05:17:54.077892 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 05:17:54.077902 | orchestrator | Saturday 11 April 2026 05:17:10 +0000 (0:00:01.159) 0:07:06.616 ******** 2026-04-11 05:17:54.077936 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:54.077947 | orchestrator | 2026-04-11 05:17:54.077958 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 05:17:54.077971 | orchestrator | Saturday 11 April 2026 05:17:11 +0000 (0:00:01.144) 0:07:07.761 ******** 2026-04-11 05:17:54.077990 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078102 | orchestrator | 2026-04-11 05:17:54.078127 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 05:17:54.078144 | orchestrator | Saturday 11 April 2026 05:17:12 +0000 (0:00:01.113) 0:07:08.875 ******** 2026-04-11 05:17:54.078163 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078181 | orchestrator | 2026-04-11 05:17:54.078200 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 05:17:54.078219 | orchestrator | Saturday 11 April 2026 05:17:13 +0000 (0:00:01.141) 0:07:10.016 ******** 2026-04-11 05:17:54.078238 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078256 | orchestrator | 
2026-04-11 05:17:54.078275 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 05:17:54.078287 | orchestrator | Saturday 11 April 2026 05:17:15 +0000 (0:00:01.203) 0:07:11.219 ******** 2026-04-11 05:17:54.078298 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078309 | orchestrator | 2026-04-11 05:17:54.078319 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 05:17:54.078330 | orchestrator | Saturday 11 April 2026 05:17:16 +0000 (0:00:01.146) 0:07:12.366 ******** 2026-04-11 05:17:54.078341 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078351 | orchestrator | 2026-04-11 05:17:54.078362 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 05:17:54.078373 | orchestrator | Saturday 11 April 2026 05:17:17 +0000 (0:00:01.229) 0:07:13.595 ******** 2026-04-11 05:17:54.078383 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:54.078394 | orchestrator | 2026-04-11 05:17:54.078405 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 05:17:54.078416 | orchestrator | Saturday 11 April 2026 05:17:18 +0000 (0:00:01.201) 0:07:14.797 ******** 2026-04-11 05:17:54.078426 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:54.078437 | orchestrator | 2026-04-11 05:17:54.078448 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 05:17:54.078458 | orchestrator | Saturday 11 April 2026 05:17:19 +0000 (0:00:01.169) 0:07:15.967 ******** 2026-04-11 05:17:54.078469 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:54.078480 | orchestrator | 2026-04-11 05:17:54.078490 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-11 05:17:54.078502 | orchestrator | Saturday 11 April 2026 05:17:20 +0000 (0:00:01.195) 0:07:17.162 
******** 2026-04-11 05:17:54.078513 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078524 | orchestrator | 2026-04-11 05:17:54.078534 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-11 05:17:54.078561 | orchestrator | Saturday 11 April 2026 05:17:22 +0000 (0:00:01.113) 0:07:18.275 ******** 2026-04-11 05:17:54.078572 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078583 | orchestrator | 2026-04-11 05:17:54.078594 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-11 05:17:54.078604 | orchestrator | Saturday 11 April 2026 05:17:23 +0000 (0:00:01.128) 0:07:19.404 ******** 2026-04-11 05:17:54.078615 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078626 | orchestrator | 2026-04-11 05:17:54.078637 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-11 05:17:54.078647 | orchestrator | Saturday 11 April 2026 05:17:24 +0000 (0:00:01.166) 0:07:20.571 ******** 2026-04-11 05:17:54.078658 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078669 | orchestrator | 2026-04-11 05:17:54.078679 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-11 05:17:54.078690 | orchestrator | Saturday 11 April 2026 05:17:25 +0000 (0:00:01.134) 0:07:21.706 ******** 2026-04-11 05:17:54.078716 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078727 | orchestrator | 2026-04-11 05:17:54.078737 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-11 05:17:54.078748 | orchestrator | Saturday 11 April 2026 05:17:26 +0000 (0:00:01.142) 0:07:22.848 ******** 2026-04-11 05:17:54.078759 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078769 | orchestrator | 2026-04-11 05:17:54.078780 | orchestrator | TASK [ceph-common : Set_fact ceph_version] 
************************************* 2026-04-11 05:17:54.078791 | orchestrator | Saturday 11 April 2026 05:17:27 +0000 (0:00:01.150) 0:07:23.999 ******** 2026-04-11 05:17:54.078801 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078812 | orchestrator | 2026-04-11 05:17:54.078823 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-11 05:17:54.078835 | orchestrator | Saturday 11 April 2026 05:17:28 +0000 (0:00:01.128) 0:07:25.128 ******** 2026-04-11 05:17:54.078865 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078881 | orchestrator | 2026-04-11 05:17:54.078892 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-11 05:17:54.078903 | orchestrator | Saturday 11 April 2026 05:17:30 +0000 (0:00:01.130) 0:07:26.258 ******** 2026-04-11 05:17:54.078914 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078925 | orchestrator | 2026-04-11 05:17:54.078936 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-11 05:17:54.078946 | orchestrator | Saturday 11 April 2026 05:17:31 +0000 (0:00:01.115) 0:07:27.374 ******** 2026-04-11 05:17:54.078957 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.078968 | orchestrator | 2026-04-11 05:17:54.078978 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-11 05:17:54.078989 | orchestrator | Saturday 11 April 2026 05:17:32 +0000 (0:00:01.127) 0:07:28.502 ******** 2026-04-11 05:17:54.079000 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.079043 | orchestrator | 2026-04-11 05:17:54.079054 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-11 05:17:54.079065 | orchestrator | Saturday 11 April 2026 05:17:33 +0000 (0:00:01.138) 0:07:29.641 ******** 2026-04-11 05:17:54.079075 | orchestrator | 
skipping: [testbed-node-0] 2026-04-11 05:17:54.079086 | orchestrator | 2026-04-11 05:17:54.079097 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-11 05:17:54.079108 | orchestrator | Saturday 11 April 2026 05:17:34 +0000 (0:00:01.122) 0:07:30.763 ******** 2026-04-11 05:17:54.079118 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:54.079129 | orchestrator | 2026-04-11 05:17:54.079140 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-11 05:17:54.079151 | orchestrator | Saturday 11 April 2026 05:17:36 +0000 (0:00:01.904) 0:07:32.667 ******** 2026-04-11 05:17:54.079161 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:54.079172 | orchestrator | 2026-04-11 05:17:54.079183 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-11 05:17:54.079194 | orchestrator | Saturday 11 April 2026 05:17:38 +0000 (0:00:02.339) 0:07:35.007 ******** 2026-04-11 05:17:54.079205 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-04-11 05:17:54.079217 | orchestrator | 2026-04-11 05:17:54.079228 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-11 05:17:54.079238 | orchestrator | Saturday 11 April 2026 05:17:40 +0000 (0:00:01.729) 0:07:36.737 ******** 2026-04-11 05:17:54.079249 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.079260 | orchestrator | 2026-04-11 05:17:54.079271 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-11 05:17:54.079281 | orchestrator | Saturday 11 April 2026 05:17:41 +0000 (0:00:01.136) 0:07:37.874 ******** 2026-04-11 05:17:54.079292 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.079303 | orchestrator | 2026-04-11 05:17:54.079314 | orchestrator | TASK [ceph-container-common : Remove ceph udev 
rules] ************************** 2026-04-11 05:17:54.079324 | orchestrator | Saturday 11 April 2026 05:17:42 +0000 (0:00:01.168) 0:07:39.042 ******** 2026-04-11 05:17:54.079342 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 05:17:54.079353 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 05:17:54.079364 | orchestrator | 2026-04-11 05:17:54.079375 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-11 05:17:54.079386 | orchestrator | Saturday 11 April 2026 05:17:44 +0000 (0:00:01.836) 0:07:40.879 ******** 2026-04-11 05:17:54.079396 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:54.079407 | orchestrator | 2026-04-11 05:17:54.079418 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-11 05:17:54.079429 | orchestrator | Saturday 11 April 2026 05:17:46 +0000 (0:00:01.590) 0:07:42.470 ******** 2026-04-11 05:17:54.079439 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.079450 | orchestrator | 2026-04-11 05:17:54.079461 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-11 05:17:54.079472 | orchestrator | Saturday 11 April 2026 05:17:47 +0000 (0:00:01.157) 0:07:43.628 ******** 2026-04-11 05:17:54.079483 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.079494 | orchestrator | 2026-04-11 05:17:54.079510 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-11 05:17:54.079521 | orchestrator | Saturday 11 April 2026 05:17:48 +0000 (0:00:01.156) 0:07:44.785 ******** 2026-04-11 05:17:54.079532 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.079543 | orchestrator | 2026-04-11 05:17:54.079554 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-11 
05:17:54.079564 | orchestrator | Saturday 11 April 2026 05:17:49 +0000 (0:00:01.120) 0:07:45.906 ******** 2026-04-11 05:17:54.079575 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-04-11 05:17:54.079586 | orchestrator | 2026-04-11 05:17:54.079597 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-11 05:17:54.079608 | orchestrator | Saturday 11 April 2026 05:17:51 +0000 (0:00:01.531) 0:07:47.437 ******** 2026-04-11 05:17:54.079618 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:17:54.079629 | orchestrator | 2026-04-11 05:17:54.079640 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-11 05:17:54.079651 | orchestrator | Saturday 11 April 2026 05:17:52 +0000 (0:00:01.668) 0:07:49.106 ******** 2026-04-11 05:17:54.079662 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 05:17:54.079673 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 05:17:54.079683 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 05:17:54.079694 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:17:54.079705 | orchestrator | 2026-04-11 05:17:54.079716 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-11 05:17:54.079734 | orchestrator | Saturday 11 April 2026 05:17:54 +0000 (0:00:01.172) 0:07:50.278 ******** 2026-04-11 05:18:42.664089 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:18:42.664193 | orchestrator | 2026-04-11 05:18:42.664206 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-11 05:18:42.664214 | orchestrator | Saturday 11 April 2026 05:17:55 +0000 (0:00:01.129) 0:07:51.407 ******** 2026-04-11 05:18:42.664221 | orchestrator | 
skipping: [testbed-node-0]
2026-04-11 05:18:42.664226 | orchestrator |
2026-04-11 05:18:42.664230 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-11 05:18:42.664234 | orchestrator | Saturday 11 April 2026 05:17:56 +0000 (0:00:01.231) 0:07:52.639 ********
2026-04-11 05:18:42.664238 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664242 | orchestrator |
2026-04-11 05:18:42.664246 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-11 05:18:42.664250 | orchestrator | Saturday 11 April 2026 05:17:57 +0000 (0:00:01.136) 0:07:53.775 ********
2026-04-11 05:18:42.664271 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664275 | orchestrator |
2026-04-11 05:18:42.664279 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-11 05:18:42.664283 | orchestrator | Saturday 11 April 2026 05:17:58 +0000 (0:00:01.177) 0:07:54.953 ********
2026-04-11 05:18:42.664287 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664290 | orchestrator |
2026-04-11 05:18:42.664294 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-11 05:18:42.664298 | orchestrator | Saturday 11 April 2026 05:17:59 +0000 (0:00:01.190) 0:07:56.143 ********
2026-04-11 05:18:42.664302 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:18:42.664306 | orchestrator |
2026-04-11 05:18:42.664310 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-11 05:18:42.664315 | orchestrator | Saturday 11 April 2026 05:18:02 +0000 (0:00:02.530) 0:07:58.674 ********
2026-04-11 05:18:42.664319 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:18:42.664322 | orchestrator |
2026-04-11 05:18:42.664326 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-11 05:18:42.664330 | orchestrator | Saturday 11 April 2026 05:18:03 +0000 (0:00:01.140) 0:07:59.814 ********
2026-04-11 05:18:42.664334 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-04-11 05:18:42.664338 | orchestrator |
2026-04-11 05:18:42.664342 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-11 05:18:42.664346 | orchestrator | Saturday 11 April 2026 05:18:05 +0000 (0:00:01.483) 0:08:01.297 ********
2026-04-11 05:18:42.664350 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664354 | orchestrator |
2026-04-11 05:18:42.664357 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-11 05:18:42.664361 | orchestrator | Saturday 11 April 2026 05:18:06 +0000 (0:00:01.156) 0:08:02.454 ********
2026-04-11 05:18:42.664365 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664369 | orchestrator |
2026-04-11 05:18:42.664372 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-11 05:18:42.664376 | orchestrator | Saturday 11 April 2026 05:18:07 +0000 (0:00:01.174) 0:08:03.628 ********
2026-04-11 05:18:42.664380 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664384 | orchestrator |
2026-04-11 05:18:42.664387 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-11 05:18:42.664391 | orchestrator | Saturday 11 April 2026 05:18:08 +0000 (0:00:01.227) 0:08:04.855 ********
2026-04-11 05:18:42.664395 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664399 | orchestrator |
2026-04-11 05:18:42.664403 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-11 05:18:42.664406 | orchestrator | Saturday 11 April 2026 05:18:09 +0000 (0:00:01.164) 0:08:06.019 ********
2026-04-11 05:18:42.664410 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664414 | orchestrator |
2026-04-11 05:18:42.664418 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-11 05:18:42.664421 | orchestrator | Saturday 11 April 2026 05:18:10 +0000 (0:00:01.130) 0:08:07.150 ********
2026-04-11 05:18:42.664425 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664429 | orchestrator |
2026-04-11 05:18:42.664433 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-11 05:18:42.664447 | orchestrator | Saturday 11 April 2026 05:18:12 +0000 (0:00:01.168) 0:08:08.318 ********
2026-04-11 05:18:42.664451 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664454 | orchestrator |
2026-04-11 05:18:42.664458 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-11 05:18:42.664462 | orchestrator | Saturday 11 April 2026 05:18:13 +0000 (0:00:01.151) 0:08:09.470 ********
2026-04-11 05:18:42.664466 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664469 | orchestrator |
2026-04-11 05:18:42.664473 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-11 05:18:42.664481 | orchestrator | Saturday 11 April 2026 05:18:14 +0000 (0:00:01.141) 0:08:10.612 ********
2026-04-11 05:18:42.664485 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:18:42.664489 | orchestrator |
2026-04-11 05:18:42.664493 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-11 05:18:42.664496 | orchestrator | Saturday 11 April 2026 05:18:15 +0000 (0:00:01.316) 0:08:11.929 ********
2026-04-11 05:18:42.664500 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-04-11 05:18:42.664505 | orchestrator |
2026-04-11 05:18:42.664508 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-11 05:18:42.664512 | orchestrator | Saturday 11 April 2026 05:18:17 +0000 (0:00:01.484) 0:08:13.413 ********
2026-04-11 05:18:42.664516 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-04-11 05:18:42.664520 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-11 05:18:42.664524 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-11 05:18:42.664528 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-11 05:18:42.664532 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-11 05:18:42.664535 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-11 05:18:42.664551 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-11 05:18:42.664555 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-11 05:18:42.664559 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-11 05:18:42.664563 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-11 05:18:42.664567 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-11 05:18:42.664570 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-11 05:18:42.664574 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-11 05:18:42.664578 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-11 05:18:42.664582 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-04-11 05:18:42.664585 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-04-11 05:18:42.664589 | orchestrator |
2026-04-11 05:18:42.664593 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-11 05:18:42.664597 | orchestrator | Saturday 11 April 2026 05:18:24 +0000 (0:00:06.850) 0:08:20.264 ********
2026-04-11 05:18:42.664600 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664604 | orchestrator |
2026-04-11 05:18:42.664608 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-11 05:18:42.664612 | orchestrator | Saturday 11 April 2026 05:18:25 +0000 (0:00:01.120) 0:08:21.384 ********
2026-04-11 05:18:42.664616 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664619 | orchestrator |
2026-04-11 05:18:42.664623 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-11 05:18:42.664627 | orchestrator | Saturday 11 April 2026 05:18:26 +0000 (0:00:01.127) 0:08:22.511 ********
2026-04-11 05:18:42.664631 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664635 | orchestrator |
2026-04-11 05:18:42.664638 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-11 05:18:42.664642 | orchestrator | Saturday 11 April 2026 05:18:27 +0000 (0:00:01.182) 0:08:23.694 ********
2026-04-11 05:18:42.664646 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664650 | orchestrator |
2026-04-11 05:18:42.664653 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-11 05:18:42.664657 | orchestrator | Saturday 11 April 2026 05:18:28 +0000 (0:00:01.104) 0:08:24.799 ********
2026-04-11 05:18:42.664661 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664665 | orchestrator |
2026-04-11 05:18:42.664668 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-11 05:18:42.664672 | orchestrator | Saturday 11 April 2026 05:18:29 +0000 (0:00:01.108) 0:08:25.908 ********
2026-04-11 05:18:42.664680 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664683 | orchestrator |
2026-04-11 05:18:42.664687 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-11 05:18:42.664691 | orchestrator | Saturday 11 April 2026 05:18:30 +0000 (0:00:01.174) 0:08:27.082 ********
2026-04-11 05:18:42.664695 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664699 | orchestrator |
2026-04-11 05:18:42.664702 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-11 05:18:42.664706 | orchestrator | Saturday 11 April 2026 05:18:32 +0000 (0:00:01.130) 0:08:28.213 ********
2026-04-11 05:18:42.664710 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664714 | orchestrator |
2026-04-11 05:18:42.664718 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-11 05:18:42.664721 | orchestrator | Saturday 11 April 2026 05:18:33 +0000 (0:00:01.140) 0:08:29.353 ********
2026-04-11 05:18:42.664725 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664729 | orchestrator |
2026-04-11 05:18:42.664733 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-11 05:18:42.664736 | orchestrator | Saturday 11 April 2026 05:18:34 +0000 (0:00:01.185) 0:08:30.538 ********
2026-04-11 05:18:42.664740 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664744 | orchestrator |
2026-04-11 05:18:42.664750 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-11 05:18:42.664754 | orchestrator | Saturday 11 April 2026 05:18:35 +0000 (0:00:01.153) 0:08:31.691 ********
2026-04-11 05:18:42.664758 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664761 | orchestrator |
2026-04-11 05:18:42.664765 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-11 05:18:42.664769 | orchestrator | Saturday 11 April 2026 05:18:36 +0000 (0:00:01.149) 0:08:32.841 ********
2026-04-11 05:18:42.664773 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664776 | orchestrator |
2026-04-11 05:18:42.664780 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-11 05:18:42.664784 | orchestrator | Saturday 11 April 2026 05:18:37 +0000 (0:00:01.135) 0:08:33.976 ********
2026-04-11 05:18:42.664788 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664792 | orchestrator |
2026-04-11 05:18:42.664795 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-11 05:18:42.664799 | orchestrator | Saturday 11 April 2026 05:18:39 +0000 (0:00:01.268) 0:08:35.245 ********
2026-04-11 05:18:42.664803 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664807 | orchestrator |
2026-04-11 05:18:42.664810 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-11 05:18:42.664814 | orchestrator | Saturday 11 April 2026 05:18:40 +0000 (0:00:01.220) 0:08:36.465 ********
2026-04-11 05:18:42.664818 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664822 | orchestrator |
2026-04-11 05:18:42.664826 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-11 05:18:42.664829 | orchestrator | Saturday 11 April 2026 05:18:41 +0000 (0:00:01.269) 0:08:37.735 ********
2026-04-11 05:18:42.664833 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:18:42.664837 | orchestrator |
2026-04-11 05:18:42.664841 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-11 05:18:42.664847 | orchestrator | Saturday 11 April 2026 05:18:42 +0000 (0:00:01.132) 0:08:38.868 ********
2026-04-11 05:19:39.941546 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.941691 | orchestrator |
2026-04-11 05:19:39.941719 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 05:19:39.941740 | orchestrator | Saturday 11 April 2026 05:18:43 +0000 (0:00:01.225) 0:08:40.093 ********
2026-04-11 05:19:39.941761 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.941781 | orchestrator |
2026-04-11 05:19:39.941801 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 05:19:39.941853 | orchestrator | Saturday 11 April 2026 05:18:45 +0000 (0:00:01.150) 0:08:41.244 ********
2026-04-11 05:19:39.941872 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.941891 | orchestrator |
2026-04-11 05:19:39.941909 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 05:19:39.941930 | orchestrator | Saturday 11 April 2026 05:18:46 +0000 (0:00:01.218) 0:08:42.463 ********
2026-04-11 05:19:39.941948 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.941967 | orchestrator |
2026-04-11 05:19:39.941984 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 05:19:39.942105 | orchestrator | Saturday 11 April 2026 05:18:47 +0000 (0:00:01.128) 0:08:43.592 ********
2026-04-11 05:19:39.942132 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.942152 | orchestrator |
2026-04-11 05:19:39.942171 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 05:19:39.942189 | orchestrator | Saturday 11 April 2026 05:18:48 +0000 (0:00:01.178) 0:08:44.770 ********
2026-04-11 05:19:39.942208 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-11 05:19:39.942226 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-11 05:19:39.942245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-11 05:19:39.942265 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.942284 | orchestrator |
2026-04-11 05:19:39.942305 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 05:19:39.942325 | orchestrator | Saturday 11 April 2026 05:18:50 +0000 (0:00:01.750) 0:08:46.521 ********
2026-04-11 05:19:39.942345 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-11 05:19:39.942366 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-11 05:19:39.942386 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-11 05:19:39.942405 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.942425 | orchestrator |
2026-04-11 05:19:39.942443 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 05:19:39.942462 | orchestrator | Saturday 11 April 2026 05:18:51 +0000 (0:00:01.453) 0:08:47.974 ********
2026-04-11 05:19:39.942480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-11 05:19:39.942499 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-11 05:19:39.942518 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-11 05:19:39.942537 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.942556 | orchestrator |
2026-04-11 05:19:39.942576 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 05:19:39.942589 | orchestrator | Saturday 11 April 2026 05:18:53 +0000 (0:00:01.416) 0:08:49.391 ********
2026-04-11 05:19:39.942600 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.942610 | orchestrator |
2026-04-11 05:19:39.942621 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 05:19:39.942632 | orchestrator | Saturday 11 April 2026 05:18:54 +0000 (0:00:01.144) 0:08:50.536 ********
2026-04-11 05:19:39.942644 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-11 05:19:39.942655 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.942666 | orchestrator |
2026-04-11 05:19:39.942677 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-11 05:19:39.942687 | orchestrator | Saturday 11 April 2026 05:18:55 +0000 (0:00:01.335) 0:08:51.871 ********
2026-04-11 05:19:39.942706 | orchestrator | changed: [testbed-node-0]
2026-04-11 05:19:39.942725 | orchestrator |
2026-04-11 05:19:39.942762 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-11 05:19:39.942782 | orchestrator | Saturday 11 April 2026 05:18:57 +0000 (0:00:01.861) 0:08:53.733 ********
2026-04-11 05:19:39.942801 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.942821 | orchestrator |
2026-04-11 05:19:39.942839 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-11 05:19:39.942867 | orchestrator | Saturday 11 April 2026 05:18:58 +0000 (0:00:01.179) 0:08:54.913 ********
2026-04-11 05:19:39.942878 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-04-11 05:19:39.942890 | orchestrator |
2026-04-11 05:19:39.942901 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-11 05:19:39.942912 | orchestrator | Saturday 11 April 2026 05:19:00 +0000 (0:00:01.536) 0:08:56.449 ********
2026-04-11 05:19:39.942922 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-04-11 05:19:39.942933 | orchestrator |
2026-04-11 05:19:39.942944 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-11 05:19:39.942955 | orchestrator | Saturday 11 April 2026 05:19:03 +0000 (0:00:03.281) 0:08:59.731 ********
2026-04-11 05:19:39.942965 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.942976 | orchestrator |
2026-04-11 05:19:39.942987 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-11 05:19:39.942998 | orchestrator | Saturday 11 April 2026 05:19:04 +0000 (0:00:01.184) 0:09:00.915 ********
2026-04-11 05:19:39.943067 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.943078 | orchestrator |
2026-04-11 05:19:39.943089 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-11 05:19:39.943100 | orchestrator | Saturday 11 April 2026 05:19:05 +0000 (0:00:01.163) 0:09:02.078 ********
2026-04-11 05:19:39.943111 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.943121 | orchestrator |
2026-04-11 05:19:39.943157 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-11 05:19:39.943190 | orchestrator | Saturday 11 April 2026 05:19:07 +0000 (0:00:01.174) 0:09:03.253 ********
2026-04-11 05:19:39.943202 | orchestrator | changed: [testbed-node-0]
2026-04-11 05:19:39.943213 | orchestrator |
2026-04-11 05:19:39.943224 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-11 05:19:39.943235 | orchestrator | Saturday 11 April 2026 05:19:09 +0000 (0:00:02.016) 0:09:05.269 ********
2026-04-11 05:19:39.943245 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.943256 | orchestrator |
2026-04-11 05:19:39.943267 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-11 05:19:39.943278 | orchestrator | Saturday 11 April 2026 05:19:10 +0000 (0:00:01.484) 0:09:06.910 ********
2026-04-11 05:19:39.943289 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.943299 | orchestrator |
2026-04-11 05:19:39.943310 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-11 05:19:39.943334 | orchestrator | Saturday 11 April 2026 05:19:12 +0000 (0:00:01.484) 0:09:08.395 ********
2026-04-11 05:19:39.943356 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.943368 | orchestrator |
2026-04-11 05:19:39.943379 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-11 05:19:39.943390 | orchestrator | Saturday 11 April 2026 05:19:13 +0000 (0:00:01.571) 0:09:09.967 ********
2026-04-11 05:19:39.943401 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.943411 | orchestrator |
2026-04-11 05:19:39.943422 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-11 05:19:39.943433 | orchestrator | Saturday 11 April 2026 05:19:15 +0000 (0:00:01.787) 0:09:11.754 ********
2026-04-11 05:19:39.943444 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.943455 | orchestrator |
2026-04-11 05:19:39.943474 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-11 05:19:39.943493 | orchestrator | Saturday 11 April 2026 05:19:17 +0000 (0:00:01.814) 0:09:13.568 ********
2026-04-11 05:19:39.943512 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-11 05:19:39.943531 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-11 05:19:39.943551 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-11 05:19:39.943571 | orchestrator | ok: [testbed-node-0 -> {{ item }}]
2026-04-11 05:19:39.943591 | orchestrator |
2026-04-11 05:19:39.943620 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-11 05:19:39.943632 | orchestrator | Saturday 11 April 2026 05:19:21 +0000 (0:00:03.939) 0:09:17.508 ********
2026-04-11 05:19:39.943643 | orchestrator | changed: [testbed-node-0]
2026-04-11 05:19:39.943653 | orchestrator |
2026-04-11 05:19:39.943664 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-11 05:19:39.943675 | orchestrator | Saturday 11 April 2026 05:19:23 +0000 (0:00:02.038) 0:09:19.547 ********
2026-04-11 05:19:39.943685 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.943696 | orchestrator |
2026-04-11 05:19:39.943707 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-11 05:19:39.943717 | orchestrator | Saturday 11 April 2026 05:19:24 +0000 (0:00:01.180) 0:09:20.727 ********
2026-04-11 05:19:39.943728 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.943739 | orchestrator |
2026-04-11 05:19:39.943749 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-11 05:19:39.943760 | orchestrator | Saturday 11 April 2026 05:19:25 +0000 (0:00:01.158) 0:09:21.885 ********
2026-04-11 05:19:39.943770 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.943781 | orchestrator |
2026-04-11 05:19:39.943792 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-11 05:19:39.943802 | orchestrator | Saturday 11 April 2026 05:19:27 +0000 (0:00:02.052) 0:09:23.938 ********
2026-04-11 05:19:39.943813 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.943824 | orchestrator |
2026-04-11 05:19:39.943834 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-11 05:19:39.943845 | orchestrator | Saturday 11 April 2026 05:19:29 +0000 (0:00:01.482) 0:09:25.420 ********
2026-04-11 05:19:39.943856 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.943867 | orchestrator |
2026-04-11 05:19:39.943877 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-11 05:19:39.943895 | orchestrator | Saturday 11 April 2026 05:19:30 +0000 (0:00:01.156) 0:09:26.577 ********
2026-04-11 05:19:39.943906 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0
2026-04-11 05:19:39.943917 | orchestrator |
2026-04-11 05:19:39.943927 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-11 05:19:39.943938 | orchestrator | Saturday 11 April 2026 05:19:31 +0000 (0:00:01.479) 0:09:28.057 ********
2026-04-11 05:19:39.943949 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.943959 | orchestrator |
2026-04-11 05:19:39.943970 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-11 05:19:39.943980 | orchestrator | Saturday 11 April 2026 05:19:32 +0000 (0:00:01.121) 0:09:29.178 ********
2026-04-11 05:19:39.943991 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:19:39.944047 | orchestrator |
2026-04-11 05:19:39.944062 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-11 05:19:39.944073 | orchestrator | Saturday 11 April 2026 05:19:34 +0000 (0:00:01.120) 0:09:30.299 ********
2026-04-11 05:19:39.944083 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0
2026-04-11 05:19:39.944094 | orchestrator |
2026-04-11 05:19:39.944105 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-11 05:19:39.944116 | orchestrator | Saturday 11 April 2026 05:19:35 +0000 (0:00:01.488) 0:09:31.787 ********
2026-04-11 05:19:39.944126 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.944137 | orchestrator |
2026-04-11 05:19:39.944148 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-11 05:19:39.944158 | orchestrator | Saturday 11 April 2026 05:19:37 +0000 (0:00:02.413) 0:09:34.200 ********
2026-04-11 05:19:39.944169 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:19:39.944180 | orchestrator |
2026-04-11 05:19:39.944190 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-11 05:19:39.944210 | orchestrator | Saturday 11 April 2026 05:19:39 +0000 (0:00:01.943) 0:09:36.144 ********
2026-04-11 05:20:30.097242 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:20:30.097426 | orchestrator |
2026-04-11 05:20:30.097456 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-11 05:20:30.097479 | orchestrator | Saturday 11 April 2026 05:19:42 +0000 (0:00:02.499) 0:09:38.644 ********
2026-04-11 05:20:30.097498 | orchestrator | changed: [testbed-node-0]
2026-04-11 05:20:30.097520 | orchestrator |
2026-04-11 05:20:30.097532 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-11 05:20:30.097543 | orchestrator | Saturday 11 April 2026 05:19:45 +0000 (0:00:03.217) 0:09:41.862 ********
2026-04-11 05:20:30.097554 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0
2026-04-11 05:20:30.097565 | orchestrator |
2026-04-11 05:20:30.097577 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-11 05:20:30.097588 | orchestrator | Saturday 11 April 2026 05:19:47 +0000 (0:00:01.616) 0:09:43.478 ********
2026-04-11 05:20:30.097599 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:20:30.097610 | orchestrator |
2026-04-11 05:20:30.097621 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-11 05:20:30.097632 | orchestrator | Saturday 11 April 2026 05:19:49 +0000 (0:00:02.233) 0:09:45.712 ********
2026-04-11 05:20:30.097643 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:20:30.097653 | orchestrator |
2026-04-11 05:20:30.097664 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-11 05:20:30.097675 | orchestrator | Saturday 11 April 2026 05:19:52 +0000 (0:00:03.003) 0:09:48.715 ********
2026-04-11 05:20:30.097686 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:20:30.097697 | orchestrator |
2026-04-11 05:20:30.097708 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-11 05:20:30.097719 | orchestrator | Saturday 11 April 2026 05:19:53 +0000 (0:00:01.107) 0:09:49.823 ********
2026-04-11 05:20:30.097732 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-11 05:20:30.097746 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-11 05:20:30.097760 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-11 05:20:30.097780 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-11 05:20:30.097820 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-11 05:20:30.097843 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}])
2026-04-11 05:20:30.097887 | orchestrator |
2026-04-11 05:20:30.097901 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-04-11 05:20:30.097912 | orchestrator | Saturday 11 April 2026 05:20:03 +0000 (0:00:10.053) 0:09:59.876 ********
2026-04-11 05:20:30.097923 | orchestrator | changed: [testbed-node-0]
2026-04-11 05:20:30.097933 | orchestrator |
2026-04-11 05:20:30.097944 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-11 05:20:30.097955 | orchestrator | Saturday 11 April 2026 05:20:06 +0000 (0:00:02.532) 0:10:02.408 ********
2026-04-11 05:20:30.097966 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 05:20:30.097977 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 05:20:30.097988 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 05:20:30.097999 | orchestrator |
2026-04-11 05:20:30.098149 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-11 05:20:30.098165 | orchestrator | Saturday 11 April 2026 05:20:08 +0000 (0:00:02.021) 0:10:04.430 ********
2026-04-11 05:20:30.098176 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 05:20:30.098187 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 05:20:30.098198 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 05:20:30.098210 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:20:30.098221 | orchestrator |
2026-04-11 05:20:30.098231 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-04-11 05:20:30.098242 | orchestrator | Saturday 11 April 2026 05:20:09 +0000 (0:00:01.355) 0:10:05.786 ********
2026-04-11 05:20:30.098254 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:20:30.098265 | orchestrator |
2026-04-11 05:20:30.098276 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-04-11 05:20:30.098287 | orchestrator | Saturday 11 April 2026 05:20:10 +0000 (0:00:01.120) 0:10:06.906 ********
2026-04-11 05:20:30.098298 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:20:30.098310 | orchestrator |
2026-04-11 05:20:30.098321 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-11 05:20:30.098332 | orchestrator | Saturday 11 April 2026 05:20:12 +0000 (0:00:02.275) 0:10:09.182 ********
2026-04-11 05:20:30.098343 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:20:30.098354 | orchestrator |
2026-04-11 05:20:30.098365 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-11 05:20:30.098376 | orchestrator | Saturday 11 April 2026 05:20:14 +0000 (0:00:01.148) 0:10:10.331 ********
2026-04-11 05:20:30.098387 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:20:30.098398 | orchestrator |
2026-04-11 05:20:30.098409 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-11 05:20:30.098420 | orchestrator | Saturday 11 April 2026 05:20:15 +0000 (0:00:01.184) 0:10:11.515 ********
2026-04-11 05:20:30.098431 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:20:30.098442 | orchestrator |
2026-04-11 05:20:30.098453 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-11 05:20:30.098464 | orchestrator | Saturday 11 April 2026 05:20:16 +0000 (0:00:01.122) 0:10:12.638 ********
2026-04-11 05:20:30.098475 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:20:30.098486 | orchestrator |
2026-04-11 05:20:30.098497 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-11 05:20:30.098508 | orchestrator | Saturday 11 April 2026 05:20:17 +0000 (0:00:01.149) 0:10:13.788 ********
2026-04-11 05:20:30.098519 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:20:30.098530 | orchestrator |
2026-04-11 05:20:30.098541 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-11 05:20:30.098564 | orchestrator | Saturday 11 April 2026 05:20:18 +0000 (0:00:01.181) 0:10:14.969 ********
2026-04-11 05:20:30.098575 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:20:30.098586 | orchestrator |
2026-04-11 05:20:30.098597 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-11 05:20:30.098608 | orchestrator | Saturday 11 April 2026 05:20:19 +0000 (0:00:01.118) 0:10:16.088 ********
2026-04-11 05:20:30.098619 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:20:30.098630 | orchestrator |
2026-04-11 05:20:30.098641 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-11 05:20:30.098652 | orchestrator |
2026-04-11 05:20:30.098663 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-11 05:20:30.098674 | orchestrator | Saturday 11 April 2026 05:20:20 +0000 (0:00:00.973) 0:10:17.062 ********
2026-04-11 05:20:30.098685 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:20:30.098696 | orchestrator |
2026-04-11 05:20:30.098707 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-11 05:20:30.098718 | orchestrator | Saturday 11 April 2026 05:20:22 +0000 (0:00:01.173) 0:10:18.235 ********
2026-04-11 05:20:30.098730 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:20:30.098741 | orchestrator |
2026-04-11 05:20:30.098752 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-11 05:20:30.098770 | orchestrator | Saturday 11 April 2026 05:20:22 +0000 (0:00:00.802) 0:10:19.038 ********
2026-04-11 05:20:30.098782 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:20:30.098793 | orchestrator |
2026-04-11 05:20:30.098804 | orchestrator
| TASK [Select a running monitor] ************************************************ 2026-04-11 05:20:30.098815 | orchestrator | Saturday 11 April 2026 05:20:23 +0000 (0:00:00.850) 0:10:19.888 ******** 2026-04-11 05:20:30.098826 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:20:30.098837 | orchestrator | 2026-04-11 05:20:30.098848 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-11 05:20:30.098859 | orchestrator | Saturday 11 April 2026 05:20:24 +0000 (0:00:00.829) 0:10:20.718 ******** 2026-04-11 05:20:30.098870 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-04-11 05:20:30.098881 | orchestrator | 2026-04-11 05:20:30.098892 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-11 05:20:30.098903 | orchestrator | Saturday 11 April 2026 05:20:25 +0000 (0:00:01.263) 0:10:21.981 ******** 2026-04-11 05:20:30.098914 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:20:30.098925 | orchestrator | 2026-04-11 05:20:30.098936 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-11 05:20:30.098947 | orchestrator | Saturday 11 April 2026 05:20:27 +0000 (0:00:01.503) 0:10:23.485 ******** 2026-04-11 05:20:30.098958 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:20:30.098969 | orchestrator | 2026-04-11 05:20:30.098980 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-11 05:20:30.098991 | orchestrator | Saturday 11 April 2026 05:20:28 +0000 (0:00:01.161) 0:10:24.647 ******** 2026-04-11 05:20:30.099002 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:20:30.099046 | orchestrator | 2026-04-11 05:20:30.099058 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 05:20:30.099069 | orchestrator | Saturday 11 April 2026 05:20:29 +0000 (0:00:01.500) 0:10:26.148 
******** 2026-04-11 05:20:30.099080 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:20:30.099091 | orchestrator | 2026-04-11 05:20:30.099109 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-11 05:20:55.428666 | orchestrator | Saturday 11 April 2026 05:20:31 +0000 (0:00:01.163) 0:10:27.311 ******** 2026-04-11 05:20:55.428784 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:20:55.428802 | orchestrator | 2026-04-11 05:20:55.428814 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-11 05:20:55.428826 | orchestrator | Saturday 11 April 2026 05:20:32 +0000 (0:00:01.128) 0:10:28.440 ******** 2026-04-11 05:20:55.428837 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:20:55.428873 | orchestrator | 2026-04-11 05:20:55.428885 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-11 05:20:55.428896 | orchestrator | Saturday 11 April 2026 05:20:33 +0000 (0:00:01.164) 0:10:29.604 ******** 2026-04-11 05:20:55.428907 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:20:55.428919 | orchestrator | 2026-04-11 05:20:55.428931 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-11 05:20:55.428942 | orchestrator | Saturday 11 April 2026 05:20:34 +0000 (0:00:01.123) 0:10:30.727 ******** 2026-04-11 05:20:55.428952 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:20:55.428963 | orchestrator | 2026-04-11 05:20:55.428974 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-11 05:20:55.428985 | orchestrator | Saturday 11 April 2026 05:20:35 +0000 (0:00:01.135) 0:10:31.863 ******** 2026-04-11 05:20:55.428997 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:20:55.429008 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-11 
05:20:55.429081 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:20:55.429092 | orchestrator | 2026-04-11 05:20:55.429103 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-11 05:20:55.429114 | orchestrator | Saturday 11 April 2026 05:20:37 +0000 (0:00:01.975) 0:10:33.838 ******** 2026-04-11 05:20:55.429125 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:20:55.429136 | orchestrator | 2026-04-11 05:20:55.429147 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-11 05:20:55.429158 | orchestrator | Saturday 11 April 2026 05:20:38 +0000 (0:00:01.277) 0:10:35.116 ******** 2026-04-11 05:20:55.429168 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:20:55.429179 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-11 05:20:55.429190 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:20:55.429203 | orchestrator | 2026-04-11 05:20:55.429215 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-11 05:20:55.429228 | orchestrator | Saturday 11 April 2026 05:20:42 +0000 (0:00:03.203) 0:10:38.319 ******** 2026-04-11 05:20:55.429241 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-11 05:20:55.429253 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-11 05:20:55.429266 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-11 05:20:55.429278 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:20:55.429291 | orchestrator | 2026-04-11 05:20:55.429304 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-11 05:20:55.429315 | orchestrator | Saturday 11 April 2026 05:20:43 +0000 (0:00:01.808) 
0:10:40.128 ******** 2026-04-11 05:20:55.429327 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-11 05:20:55.429356 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-11 05:20:55.429368 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-11 05:20:55.429379 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:20:55.429390 | orchestrator | 2026-04-11 05:20:55.429401 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-11 05:20:55.429412 | orchestrator | Saturday 11 April 2026 05:20:45 +0000 (0:00:02.035) 0:10:42.164 ******** 2026-04-11 05:20:55.429433 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 05:20:55.429449 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 05:20:55.429480 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 05:20:55.429492 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:20:55.429503 | orchestrator | 2026-04-11 05:20:55.429514 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-11 05:20:55.429525 | orchestrator | Saturday 11 April 2026 05:20:47 +0000 (0:00:01.185) 0:10:43.350 ******** 2026-04-11 05:20:55.429538 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 05:20:39.410193', 'end': '2026-04-11 05:20:39.464374', 'delta': '0:00:00.054181', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-11 05:20:55.429553 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '1a56ecc96cb4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 05:20:40.319891', 'end': '2026-04-11 
05:20:40.366003', 'delta': '0:00:00.046112', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1a56ecc96cb4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-11 05:20:55.429571 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'f023dde40a6c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 05:20:40.897835', 'end': '2026-04-11 05:20:40.953899', 'delta': '0:00:00.056064', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f023dde40a6c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-11 05:20:55.429583 | orchestrator | 2026-04-11 05:20:55.429594 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-11 05:20:55.429612 | orchestrator | Saturday 11 April 2026 05:20:48 +0000 (0:00:01.275) 0:10:44.626 ******** 2026-04-11 05:20:55.429624 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:20:55.429634 | orchestrator | 2026-04-11 05:20:55.429645 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-11 05:20:55.429656 | orchestrator | Saturday 11 April 2026 05:20:49 +0000 (0:00:01.233) 0:10:45.860 ******** 2026-04-11 05:20:55.429667 | orchestrator | skipping: 
[testbed-node-1] 2026-04-11 05:20:55.429678 | orchestrator | 2026-04-11 05:20:55.429688 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-11 05:20:55.429699 | orchestrator | Saturday 11 April 2026 05:20:50 +0000 (0:00:01.250) 0:10:47.110 ******** 2026-04-11 05:20:55.429710 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:20:55.429721 | orchestrator | 2026-04-11 05:20:55.429731 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-11 05:20:55.429742 | orchestrator | Saturday 11 April 2026 05:20:52 +0000 (0:00:01.177) 0:10:48.288 ******** 2026-04-11 05:20:55.429753 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-04-11 05:20:55.429764 | orchestrator | 2026-04-11 05:20:55.429774 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 05:20:55.429785 | orchestrator | Saturday 11 April 2026 05:20:54 +0000 (0:00:01.993) 0:10:50.282 ******** 2026-04-11 05:20:55.429796 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:20:55.429807 | orchestrator | 2026-04-11 05:20:55.429817 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-11 05:20:55.429828 | orchestrator | Saturday 11 April 2026 05:20:55 +0000 (0:00:01.199) 0:10:51.481 ******** 2026-04-11 05:20:55.429839 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:20:55.429850 | orchestrator | 2026-04-11 05:20:55.429866 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-11 05:21:07.002007 | orchestrator | Saturday 11 April 2026 05:20:56 +0000 (0:00:01.138) 0:10:52.620 ******** 2026-04-11 05:21:07.002227 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:07.002245 | orchestrator | 2026-04-11 05:21:07.002258 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 
05:21:07.002270 | orchestrator | Saturday 11 April 2026 05:20:57 +0000 (0:00:01.259) 0:10:53.879 ******** 2026-04-11 05:21:07.002281 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:07.002292 | orchestrator | 2026-04-11 05:21:07.002303 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-11 05:21:07.002314 | orchestrator | Saturday 11 April 2026 05:20:58 +0000 (0:00:01.172) 0:10:55.051 ******** 2026-04-11 05:21:07.002325 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:07.002336 | orchestrator | 2026-04-11 05:21:07.002347 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-11 05:21:07.002358 | orchestrator | Saturday 11 April 2026 05:20:59 +0000 (0:00:01.110) 0:10:56.162 ******** 2026-04-11 05:21:07.002368 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:07.002379 | orchestrator | 2026-04-11 05:21:07.002390 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-11 05:21:07.002401 | orchestrator | Saturday 11 April 2026 05:21:01 +0000 (0:00:01.119) 0:10:57.281 ******** 2026-04-11 05:21:07.002412 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:07.002423 | orchestrator | 2026-04-11 05:21:07.002434 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-11 05:21:07.002444 | orchestrator | Saturday 11 April 2026 05:21:02 +0000 (0:00:01.131) 0:10:58.413 ******** 2026-04-11 05:21:07.002455 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:07.002466 | orchestrator | 2026-04-11 05:21:07.002477 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-11 05:21:07.002488 | orchestrator | Saturday 11 April 2026 05:21:03 +0000 (0:00:01.155) 0:10:59.569 ******** 2026-04-11 05:21:07.002499 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:07.002510 | 
orchestrator | 2026-04-11 05:21:07.002547 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-11 05:21:07.002561 | orchestrator | Saturday 11 April 2026 05:21:04 +0000 (0:00:01.147) 0:11:00.716 ******** 2026-04-11 05:21:07.002574 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:07.002587 | orchestrator | 2026-04-11 05:21:07.002600 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-11 05:21:07.002613 | orchestrator | Saturday 11 April 2026 05:21:05 +0000 (0:00:01.145) 0:11:01.862 ******** 2026-04-11 05:21:07.002629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:21:07.002645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:21:07.002672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-04-11 05:21:07.002689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 05:21:07.002706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:21:07.002739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:21:07.002753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 
05:21:07.002778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c2a3b65', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 05:21:07.002804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:21:07.002818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:21:07.002832 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:07.002845 | orchestrator | 2026-04-11 05:21:07.002858 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 05:21:07.002872 | orchestrator | Saturday 11 April 2026 05:21:06 +0000 (0:00:01.275) 0:11:03.138 ******** 2026-04-11 05:21:07.002894 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:21:12.236990 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:21:12.237159 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:21:12.237354 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:21:12.237385 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:21:12.237398 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:21:12.237409 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:21:12.237449 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c2a3b65', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:21:12.237481 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:21:12.237494 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:21:12.237506 | orchestrator | skipping: [testbed-node-1] 2026-04-11 
05:21:12.237520 | orchestrator | 2026-04-11 05:21:12.237535 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-11 05:21:12.237549 | orchestrator | Saturday 11 April 2026 05:21:08 +0000 (0:00:01.229) 0:11:04.367 ******** 2026-04-11 05:21:12.237562 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:21:12.237575 | orchestrator | 2026-04-11 05:21:12.237588 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-11 05:21:12.237601 | orchestrator | Saturday 11 April 2026 05:21:09 +0000 (0:00:01.462) 0:11:05.830 ******** 2026-04-11 05:21:12.237614 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:21:12.237626 | orchestrator | 2026-04-11 05:21:12.237638 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:21:12.237658 | orchestrator | Saturday 11 April 2026 05:21:10 +0000 (0:00:01.130) 0:11:06.960 ******** 2026-04-11 05:21:12.237670 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:21:12.237683 | orchestrator | 2026-04-11 05:21:12.237696 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:21:12.237716 | orchestrator | Saturday 11 April 2026 05:21:12 +0000 (0:00:01.483) 0:11:08.444 ******** 2026-04-11 05:21:51.831182 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.831301 | orchestrator | 2026-04-11 05:21:51.831320 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:21:51.831333 | orchestrator | Saturday 11 April 2026 05:21:13 +0000 (0:00:01.144) 0:11:09.589 ******** 2026-04-11 05:21:51.831345 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.831356 | orchestrator | 2026-04-11 05:21:51.831368 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:21:51.831379 | orchestrator | Saturday 11 April 2026 
05:21:14 +0000 (0:00:01.232) 0:11:10.822 ******** 2026-04-11 05:21:51.831390 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.831401 | orchestrator | 2026-04-11 05:21:51.831412 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 05:21:51.831424 | orchestrator | Saturday 11 April 2026 05:21:15 +0000 (0:00:01.132) 0:11:11.954 ******** 2026-04-11 05:21:51.831435 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-11 05:21:51.831447 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-11 05:21:51.831458 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-11 05:21:51.831469 | orchestrator | 2026-04-11 05:21:51.831480 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 05:21:51.831491 | orchestrator | Saturday 11 April 2026 05:21:17 +0000 (0:00:01.997) 0:11:13.952 ******** 2026-04-11 05:21:51.831502 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-11 05:21:51.831513 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-11 05:21:51.831524 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-11 05:21:51.831535 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.831546 | orchestrator | 2026-04-11 05:21:51.831557 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 05:21:51.831568 | orchestrator | Saturday 11 April 2026 05:21:18 +0000 (0:00:01.224) 0:11:15.176 ******** 2026-04-11 05:21:51.831579 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.831590 | orchestrator | 2026-04-11 05:21:51.831601 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 05:21:51.831613 | orchestrator | Saturday 11 April 2026 05:21:20 +0000 (0:00:01.156) 0:11:16.333 ******** 2026-04-11 05:21:51.831624 | 
orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:21:51.831637 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-11 05:21:51.831648 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:21:51.831661 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:21:51.831674 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:21:51.831687 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:21:51.831700 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:21:51.831713 | orchestrator | 2026-04-11 05:21:51.831741 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 05:21:51.831754 | orchestrator | Saturday 11 April 2026 05:21:21 +0000 (0:00:01.835) 0:11:18.168 ******** 2026-04-11 05:21:51.831767 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:21:51.831780 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-11 05:21:51.831816 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:21:51.831828 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:21:51.831838 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:21:51.831849 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:21:51.831860 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:21:51.831871 | orchestrator | 2026-04-11 05:21:51.831882 | 
orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-04-11 05:21:51.831908 | orchestrator | Saturday 11 April 2026 05:21:24 +0000 (0:00:02.182) 0:11:20.351 ******** 2026-04-11 05:21:51.831919 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.831930 | orchestrator | 2026-04-11 05:21:51.831941 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-04-11 05:21:51.831952 | orchestrator | Saturday 11 April 2026 05:21:25 +0000 (0:00:00.904) 0:11:21.256 ******** 2026-04-11 05:21:51.831963 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.831974 | orchestrator | 2026-04-11 05:21:51.831985 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-04-11 05:21:51.831996 | orchestrator | Saturday 11 April 2026 05:21:25 +0000 (0:00:00.933) 0:11:22.189 ******** 2026-04-11 05:21:51.832007 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.832018 | orchestrator | 2026-04-11 05:21:51.832048 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-04-11 05:21:51.832060 | orchestrator | Saturday 11 April 2026 05:21:26 +0000 (0:00:00.813) 0:11:23.002 ******** 2026-04-11 05:21:51.832071 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.832082 | orchestrator | 2026-04-11 05:21:51.832093 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-04-11 05:21:51.832104 | orchestrator | Saturday 11 April 2026 05:21:27 +0000 (0:00:00.864) 0:11:23.867 ******** 2026-04-11 05:21:51.832115 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.832126 | orchestrator | 2026-04-11 05:21:51.832137 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-04-11 05:21:51.832148 | orchestrator | Saturday 11 April 2026 05:21:28 +0000 (0:00:00.915) 0:11:24.783 ******** 
2026-04-11 05:21:51.832178 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-11 05:21:51.832190 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-11 05:21:51.832201 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-11 05:21:51.832212 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.832223 | orchestrator | 2026-04-11 05:21:51.832234 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-04-11 05:21:51.832244 | orchestrator | Saturday 11 April 2026 05:21:29 +0000 (0:00:01.039) 0:11:25.823 ******** 2026-04-11 05:21:51.832255 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-04-11 05:21:51.832266 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-04-11 05:21:51.832276 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-04-11 05:21:51.832287 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-04-11 05:21:51.832298 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-04-11 05:21:51.832308 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-04-11 05:21:51.832319 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.832330 | orchestrator | 2026-04-11 05:21:51.832340 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-04-11 05:21:51.832351 | orchestrator | Saturday 11 April 2026 05:21:31 +0000 (0:00:01.692) 0:11:27.515 ******** 2026-04-11 05:21:51.832362 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-04-11 05:21:51.832382 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-11 05:21:51.832393 | orchestrator | 2026-04-11 05:21:51.832403 | 
orchestrator | TASK [Mask the mgr service] **************************************************** 2026-04-11 05:21:51.832414 | orchestrator | Saturday 11 April 2026 05:21:34 +0000 (0:00:03.232) 0:11:30.747 ******** 2026-04-11 05:21:51.832425 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:21:51.832435 | orchestrator | 2026-04-11 05:21:51.832446 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 05:21:51.832457 | orchestrator | Saturday 11 April 2026 05:21:36 +0000 (0:00:02.160) 0:11:32.908 ******** 2026-04-11 05:21:51.832467 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-04-11 05:21:51.832479 | orchestrator | 2026-04-11 05:21:51.832489 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 05:21:51.832500 | orchestrator | Saturday 11 April 2026 05:21:37 +0000 (0:00:01.199) 0:11:34.108 ******** 2026-04-11 05:21:51.832511 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-04-11 05:21:51.832521 | orchestrator | 2026-04-11 05:21:51.832532 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 05:21:51.832543 | orchestrator | Saturday 11 April 2026 05:21:39 +0000 (0:00:01.131) 0:11:35.240 ******** 2026-04-11 05:21:51.832554 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:21:51.832565 | orchestrator | 2026-04-11 05:21:51.832581 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 05:21:51.832592 | orchestrator | Saturday 11 April 2026 05:21:40 +0000 (0:00:01.521) 0:11:36.761 ******** 2026-04-11 05:21:51.832603 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.832614 | orchestrator | 2026-04-11 05:21:51.832625 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-04-11 05:21:51.832636 | orchestrator | Saturday 11 April 2026 05:21:41 +0000 (0:00:01.112) 0:11:37.874 ******** 2026-04-11 05:21:51.832647 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.832657 | orchestrator | 2026-04-11 05:21:51.832668 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 05:21:51.832679 | orchestrator | Saturday 11 April 2026 05:21:42 +0000 (0:00:01.098) 0:11:38.972 ******** 2026-04-11 05:21:51.832689 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.832700 | orchestrator | 2026-04-11 05:21:51.832711 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 05:21:51.832722 | orchestrator | Saturday 11 April 2026 05:21:43 +0000 (0:00:01.143) 0:11:40.116 ******** 2026-04-11 05:21:51.832732 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:21:51.832743 | orchestrator | 2026-04-11 05:21:51.832753 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 05:21:51.832764 | orchestrator | Saturday 11 April 2026 05:21:45 +0000 (0:00:01.609) 0:11:41.726 ******** 2026-04-11 05:21:51.832775 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.832785 | orchestrator | 2026-04-11 05:21:51.832796 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 05:21:51.832807 | orchestrator | Saturday 11 April 2026 05:21:46 +0000 (0:00:01.104) 0:11:42.830 ******** 2026-04-11 05:21:51.832817 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.832828 | orchestrator | 2026-04-11 05:21:51.832839 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 05:21:51.832849 | orchestrator | Saturday 11 April 2026 05:21:47 +0000 (0:00:01.136) 0:11:43.967 ******** 2026-04-11 05:21:51.832860 | orchestrator | ok: [testbed-node-1] 
2026-04-11 05:21:51.832871 | orchestrator | 2026-04-11 05:21:51.832881 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 05:21:51.832892 | orchestrator | Saturday 11 April 2026 05:21:49 +0000 (0:00:01.565) 0:11:45.533 ******** 2026-04-11 05:21:51.832902 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:21:51.832913 | orchestrator | 2026-04-11 05:21:51.832924 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 05:21:51.832942 | orchestrator | Saturday 11 April 2026 05:21:50 +0000 (0:00:01.638) 0:11:47.172 ******** 2026-04-11 05:21:51.832953 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:21:51.832963 | orchestrator | 2026-04-11 05:21:51.832974 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 05:21:51.832985 | orchestrator | Saturday 11 April 2026 05:21:51 +0000 (0:00:00.786) 0:11:47.959 ******** 2026-04-11 05:21:51.833002 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:22:30.515461 | orchestrator | 2026-04-11 05:22:30.515558 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 05:22:30.515566 | orchestrator | Saturday 11 April 2026 05:21:52 +0000 (0:00:00.821) 0:11:48.781 ******** 2026-04-11 05:22:30.515571 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515577 | orchestrator | 2026-04-11 05:22:30.515581 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 05:22:30.515585 | orchestrator | Saturday 11 April 2026 05:21:53 +0000 (0:00:00.757) 0:11:49.539 ******** 2026-04-11 05:22:30.515590 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515593 | orchestrator | 2026-04-11 05:22:30.515597 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 05:22:30.515601 | orchestrator | Saturday 11 
April 2026 05:21:54 +0000 (0:00:00.793) 0:11:50.332 ******** 2026-04-11 05:22:30.515605 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515609 | orchestrator | 2026-04-11 05:22:30.515613 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 05:22:30.515616 | orchestrator | Saturday 11 April 2026 05:21:54 +0000 (0:00:00.786) 0:11:51.119 ******** 2026-04-11 05:22:30.515620 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515624 | orchestrator | 2026-04-11 05:22:30.515628 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 05:22:30.515631 | orchestrator | Saturday 11 April 2026 05:21:55 +0000 (0:00:00.813) 0:11:51.933 ******** 2026-04-11 05:22:30.515635 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515639 | orchestrator | 2026-04-11 05:22:30.515643 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 05:22:30.515646 | orchestrator | Saturday 11 April 2026 05:21:56 +0000 (0:00:00.788) 0:11:52.721 ******** 2026-04-11 05:22:30.515650 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:22:30.515655 | orchestrator | 2026-04-11 05:22:30.515659 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 05:22:30.515662 | orchestrator | Saturday 11 April 2026 05:21:57 +0000 (0:00:00.784) 0:11:53.506 ******** 2026-04-11 05:22:30.515666 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:22:30.515670 | orchestrator | 2026-04-11 05:22:30.515674 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 05:22:30.515677 | orchestrator | Saturday 11 April 2026 05:21:58 +0000 (0:00:00.811) 0:11:54.317 ******** 2026-04-11 05:22:30.515681 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:22:30.515685 | orchestrator | 2026-04-11 05:22:30.515689 | orchestrator | TASK 
[ceph-common : Include configure_repository.yml] ************************** 2026-04-11 05:22:30.515692 | orchestrator | Saturday 11 April 2026 05:21:58 +0000 (0:00:00.787) 0:11:55.105 ******** 2026-04-11 05:22:30.515696 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515700 | orchestrator | 2026-04-11 05:22:30.515704 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-11 05:22:30.515708 | orchestrator | Saturday 11 April 2026 05:21:59 +0000 (0:00:00.784) 0:11:55.889 ******** 2026-04-11 05:22:30.515712 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515715 | orchestrator | 2026-04-11 05:22:30.515719 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-11 05:22:30.515723 | orchestrator | Saturday 11 April 2026 05:22:00 +0000 (0:00:00.778) 0:11:56.667 ******** 2026-04-11 05:22:30.515738 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515742 | orchestrator | 2026-04-11 05:22:30.515746 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-11 05:22:30.515763 | orchestrator | Saturday 11 April 2026 05:22:01 +0000 (0:00:00.786) 0:11:57.454 ******** 2026-04-11 05:22:30.515767 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515771 | orchestrator | 2026-04-11 05:22:30.515775 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-11 05:22:30.515779 | orchestrator | Saturday 11 April 2026 05:22:02 +0000 (0:00:00.835) 0:11:58.289 ******** 2026-04-11 05:22:30.515782 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515786 | orchestrator | 2026-04-11 05:22:30.515790 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-11 05:22:30.515794 | orchestrator | Saturday 11 April 2026 05:22:02 +0000 (0:00:00.780) 0:11:59.069 ******** 2026-04-11 
05:22:30.515798 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515801 | orchestrator | 2026-04-11 05:22:30.515805 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-11 05:22:30.515809 | orchestrator | Saturday 11 April 2026 05:22:03 +0000 (0:00:00.748) 0:11:59.818 ******** 2026-04-11 05:22:30.515812 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515816 | orchestrator | 2026-04-11 05:22:30.515820 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-11 05:22:30.515825 | orchestrator | Saturday 11 April 2026 05:22:04 +0000 (0:00:00.780) 0:12:00.598 ******** 2026-04-11 05:22:30.515828 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515832 | orchestrator | 2026-04-11 05:22:30.515836 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-11 05:22:30.515840 | orchestrator | Saturday 11 April 2026 05:22:05 +0000 (0:00:00.793) 0:12:01.392 ******** 2026-04-11 05:22:30.515843 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515847 | orchestrator | 2026-04-11 05:22:30.515851 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-11 05:22:30.515854 | orchestrator | Saturday 11 April 2026 05:22:05 +0000 (0:00:00.795) 0:12:02.187 ******** 2026-04-11 05:22:30.515858 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515862 | orchestrator | 2026-04-11 05:22:30.515866 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-11 05:22:30.515869 | orchestrator | Saturday 11 April 2026 05:22:06 +0000 (0:00:00.821) 0:12:03.008 ******** 2026-04-11 05:22:30.515873 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515877 | orchestrator | 2026-04-11 05:22:30.515881 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-11 05:22:30.515884 | orchestrator | Saturday 11 April 2026 05:22:07 +0000 (0:00:00.755) 0:12:03.764 ******** 2026-04-11 05:22:30.515888 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515892 | orchestrator | 2026-04-11 05:22:30.515906 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-11 05:22:30.515910 | orchestrator | Saturday 11 April 2026 05:22:08 +0000 (0:00:00.772) 0:12:04.537 ******** 2026-04-11 05:22:30.515914 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:22:30.515918 | orchestrator | 2026-04-11 05:22:30.515922 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-11 05:22:30.515925 | orchestrator | Saturday 11 April 2026 05:22:09 +0000 (0:00:01.558) 0:12:06.095 ******** 2026-04-11 05:22:30.515929 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:22:30.515933 | orchestrator | 2026-04-11 05:22:30.515937 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-11 05:22:30.515940 | orchestrator | Saturday 11 April 2026 05:22:11 +0000 (0:00:02.020) 0:12:08.115 ******** 2026-04-11 05:22:30.515944 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-04-11 05:22:30.515949 | orchestrator | 2026-04-11 05:22:30.515955 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-11 05:22:30.515961 | orchestrator | Saturday 11 April 2026 05:22:13 +0000 (0:00:01.245) 0:12:09.361 ******** 2026-04-11 05:22:30.515967 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.515977 | orchestrator | 2026-04-11 05:22:30.515983 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-11 05:22:30.515989 | orchestrator | Saturday 11 April 2026 05:22:14 +0000 (0:00:01.121) 0:12:10.483 ******** 
2026-04-11 05:22:30.515995 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.516000 | orchestrator | 2026-04-11 05:22:30.516006 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-11 05:22:30.516012 | orchestrator | Saturday 11 April 2026 05:22:15 +0000 (0:00:01.135) 0:12:11.618 ******** 2026-04-11 05:22:30.516019 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 05:22:30.516026 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 05:22:30.516098 | orchestrator | 2026-04-11 05:22:30.516106 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-11 05:22:30.516111 | orchestrator | Saturday 11 April 2026 05:22:17 +0000 (0:00:01.818) 0:12:13.437 ******** 2026-04-11 05:22:30.516116 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:22:30.516120 | orchestrator | 2026-04-11 05:22:30.516124 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-11 05:22:30.516129 | orchestrator | Saturday 11 April 2026 05:22:18 +0000 (0:00:01.502) 0:12:14.940 ******** 2026-04-11 05:22:30.516133 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.516137 | orchestrator | 2026-04-11 05:22:30.516142 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-11 05:22:30.516146 | orchestrator | Saturday 11 April 2026 05:22:19 +0000 (0:00:01.130) 0:12:16.070 ******** 2026-04-11 05:22:30.516150 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.516155 | orchestrator | 2026-04-11 05:22:30.516159 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-11 05:22:30.516163 | orchestrator | Saturday 11 April 2026 05:22:20 +0000 (0:00:00.870) 0:12:16.940 ******** 2026-04-11 05:22:30.516168 | orchestrator | 
skipping: [testbed-node-1] 2026-04-11 05:22:30.516172 | orchestrator | 2026-04-11 05:22:30.516181 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-11 05:22:30.516187 | orchestrator | Saturday 11 April 2026 05:22:21 +0000 (0:00:00.822) 0:12:17.763 ******** 2026-04-11 05:22:30.516193 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-04-11 05:22:30.516198 | orchestrator | 2026-04-11 05:22:30.516202 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-11 05:22:30.516206 | orchestrator | Saturday 11 April 2026 05:22:22 +0000 (0:00:01.139) 0:12:18.902 ******** 2026-04-11 05:22:30.516211 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:22:30.516215 | orchestrator | 2026-04-11 05:22:30.516219 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-11 05:22:30.516224 | orchestrator | Saturday 11 April 2026 05:22:24 +0000 (0:00:01.788) 0:12:20.691 ******** 2026-04-11 05:22:30.516229 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 05:22:30.516235 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 05:22:30.516242 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 05:22:30.516248 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.516255 | orchestrator | 2026-04-11 05:22:30.516261 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-11 05:22:30.516268 | orchestrator | Saturday 11 April 2026 05:22:25 +0000 (0:00:01.230) 0:12:21.922 ******** 2026-04-11 05:22:30.516276 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.516281 | orchestrator | 2026-04-11 05:22:30.516285 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-11 05:22:30.516290 | orchestrator | Saturday 11 April 2026 05:22:26 +0000 (0:00:01.133) 0:12:23.056 ******** 2026-04-11 05:22:30.516294 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.516299 | orchestrator | 2026-04-11 05:22:30.516308 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-11 05:22:30.516312 | orchestrator | Saturday 11 April 2026 05:22:28 +0000 (0:00:01.210) 0:12:24.267 ******** 2026-04-11 05:22:30.516316 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.516321 | orchestrator | 2026-04-11 05:22:30.516325 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-11 05:22:30.516329 | orchestrator | Saturday 11 April 2026 05:22:29 +0000 (0:00:01.150) 0:12:25.418 ******** 2026-04-11 05:22:30.516333 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.516338 | orchestrator | 2026-04-11 05:22:30.516342 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-11 05:22:30.516347 | orchestrator | Saturday 11 April 2026 05:22:30 +0000 (0:00:01.112) 0:12:26.530 ******** 2026-04-11 05:22:30.516351 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:22:30.516355 | orchestrator | 2026-04-11 05:22:30.516366 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-11 05:23:09.297019 | orchestrator | Saturday 11 April 2026 05:22:31 +0000 (0:00:00.846) 0:12:27.377 ******** 2026-04-11 05:23:09.297195 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:23:09.297213 | orchestrator | 2026-04-11 05:23:09.297227 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-11 05:23:09.297240 | orchestrator | Saturday 11 April 2026 05:22:33 +0000 (0:00:02.214) 0:12:29.591 ******** 2026-04-11 05:23:09.297251 | orchestrator | ok: 
[testbed-node-1] 2026-04-11 05:23:09.297262 | orchestrator | 2026-04-11 05:23:09.297273 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-11 05:23:09.297284 | orchestrator | Saturday 11 April 2026 05:22:34 +0000 (0:00:00.806) 0:12:30.398 ******** 2026-04-11 05:23:09.297296 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-04-11 05:23:09.297308 | orchestrator | 2026-04-11 05:23:09.297319 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-11 05:23:09.297329 | orchestrator | Saturday 11 April 2026 05:22:35 +0000 (0:00:01.128) 0:12:31.527 ******** 2026-04-11 05:23:09.297340 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.297353 | orchestrator | 2026-04-11 05:23:09.297364 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-11 05:23:09.297375 | orchestrator | Saturday 11 April 2026 05:22:36 +0000 (0:00:01.122) 0:12:32.649 ******** 2026-04-11 05:23:09.297386 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.297397 | orchestrator | 2026-04-11 05:23:09.297407 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-11 05:23:09.297418 | orchestrator | Saturday 11 April 2026 05:22:37 +0000 (0:00:01.132) 0:12:33.781 ******** 2026-04-11 05:23:09.297429 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.297440 | orchestrator | 2026-04-11 05:23:09.297450 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-11 05:23:09.297461 | orchestrator | Saturday 11 April 2026 05:22:38 +0000 (0:00:01.169) 0:12:34.951 ******** 2026-04-11 05:23:09.297472 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.297483 | orchestrator | 2026-04-11 05:23:09.297494 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-11 05:23:09.297505 | orchestrator | Saturday 11 April 2026 05:22:39 +0000 (0:00:01.136) 0:12:36.087 ******** 2026-04-11 05:23:09.297516 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.297529 | orchestrator | 2026-04-11 05:23:09.297542 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-11 05:23:09.297555 | orchestrator | Saturday 11 April 2026 05:22:41 +0000 (0:00:01.235) 0:12:37.323 ******** 2026-04-11 05:23:09.297568 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.297580 | orchestrator | 2026-04-11 05:23:09.297593 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-11 05:23:09.297606 | orchestrator | Saturday 11 April 2026 05:22:42 +0000 (0:00:01.143) 0:12:38.466 ******** 2026-04-11 05:23:09.297619 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.297662 | orchestrator | 2026-04-11 05:23:09.297675 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-11 05:23:09.297688 | orchestrator | Saturday 11 April 2026 05:22:43 +0000 (0:00:01.120) 0:12:39.587 ******** 2026-04-11 05:23:09.297717 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.297731 | orchestrator | 2026-04-11 05:23:09.297744 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-11 05:23:09.297756 | orchestrator | Saturday 11 April 2026 05:22:44 +0000 (0:00:01.122) 0:12:40.710 ******** 2026-04-11 05:23:09.297769 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:23:09.297781 | orchestrator | 2026-04-11 05:23:09.297794 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-11 05:23:09.297806 | orchestrator | Saturday 11 April 2026 05:22:45 +0000 (0:00:00.777) 0:12:41.488 ******** 2026-04-11 05:23:09.297819 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-04-11 05:23:09.297848 | orchestrator | 2026-04-11 05:23:09.297872 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-11 05:23:09.297884 | orchestrator | Saturday 11 April 2026 05:22:46 +0000 (0:00:01.110) 0:12:42.598 ******** 2026-04-11 05:23:09.297895 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-04-11 05:23:09.297906 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-11 05:23:09.297917 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-11 05:23:09.297928 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-11 05:23:09.297939 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-11 05:23:09.297949 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-11 05:23:09.297960 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-11 05:23:09.297971 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-11 05:23:09.297982 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 05:23:09.297993 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 05:23:09.298004 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 05:23:09.298117 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 05:23:09.298133 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 05:23:09.298144 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 05:23:09.298155 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-04-11 05:23:09.298166 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-04-11 05:23:09.298176 | orchestrator | 2026-04-11 05:23:09.298187 | orchestrator | 
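The "Create ceph initial directories" task above loops over a fixed list of sixteen paths and reports `ok` because they already exist. A loose Python sketch of the same idempotent loop (the real implementation is an Ansible `file` task in `create_ceph_initial_dirs.yml`; here the tree is rooted at a temp dir so the sketch is safe to run anywhere):

```python
import os
import tempfile

# Directory list as reported by the task above, in relative form.
CEPH_DIRS = [
    "etc/ceph",
    "var/lib/ceph",
    "var/lib/ceph/mon",
    "var/lib/ceph/osd",
    "var/lib/ceph/mds",
    "var/lib/ceph/tmp",
    "var/lib/ceph/crash",
    "var/lib/ceph/radosgw",
    "var/lib/ceph/bootstrap-rgw",
    "var/lib/ceph/bootstrap-mgr",
    "var/lib/ceph/bootstrap-mds",
    "var/lib/ceph/bootstrap-osd",
    "var/lib/ceph/bootstrap-rbd",
    "var/lib/ceph/bootstrap-rbd-mirror",
    "var/run/ceph",
    "var/log/ceph",
]

def create_initial_dirs(root: str) -> list:
    """Idempotently create the ceph directory tree under *root*."""
    created = []
    for rel in CEPH_DIRS:
        path = os.path.join(root, rel)
        # exist_ok mirrors the task reporting 'ok' instead of 'changed'
        # on reruns against an already-provisioned node.
        os.makedirs(path, exist_ok=True)
        created.append(path)
    return created

root = tempfile.mkdtemp()
paths = create_initial_dirs(root)
print(len(paths))  # 16, matching the loop items above
```

Running it twice against the same root leaves the result unchanged, which is why the upgrade run above shows every item as `ok` rather than `changed`.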
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-11 05:23:09.298197 | orchestrator | Saturday 11 April 2026 05:22:52 +0000 (0:00:06.566) 0:12:49.165 ******** 2026-04-11 05:23:09.298208 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298219 | orchestrator | 2026-04-11 05:23:09.298230 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-11 05:23:09.298259 | orchestrator | Saturday 11 April 2026 05:22:53 +0000 (0:00:00.787) 0:12:49.953 ******** 2026-04-11 05:23:09.298270 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298281 | orchestrator | 2026-04-11 05:23:09.298292 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-11 05:23:09.298303 | orchestrator | Saturday 11 April 2026 05:22:54 +0000 (0:00:00.792) 0:12:50.746 ******** 2026-04-11 05:23:09.298313 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298324 | orchestrator | 2026-04-11 05:23:09.298335 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-11 05:23:09.298345 | orchestrator | Saturday 11 April 2026 05:22:55 +0000 (0:00:00.866) 0:12:51.612 ******** 2026-04-11 05:23:09.298356 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298367 | orchestrator | 2026-04-11 05:23:09.298388 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-11 05:23:09.298399 | orchestrator | Saturday 11 April 2026 05:22:56 +0000 (0:00:00.782) 0:12:52.395 ******** 2026-04-11 05:23:09.298409 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298420 | orchestrator | 2026-04-11 05:23:09.298431 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-11 05:23:09.298441 | orchestrator | Saturday 11 April 2026 05:22:56 +0000 (0:00:00.800) 0:12:53.195 ******** 2026-04-11 
05:23:09.298452 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298463 | orchestrator | 2026-04-11 05:23:09.298473 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-11 05:23:09.298484 | orchestrator | Saturday 11 April 2026 05:22:57 +0000 (0:00:00.774) 0:12:53.970 ******** 2026-04-11 05:23:09.298495 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298506 | orchestrator | 2026-04-11 05:23:09.298517 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-11 05:23:09.298527 | orchestrator | Saturday 11 April 2026 05:22:58 +0000 (0:00:00.802) 0:12:54.772 ******** 2026-04-11 05:23:09.298538 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298548 | orchestrator | 2026-04-11 05:23:09.298559 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-11 05:23:09.298570 | orchestrator | Saturday 11 April 2026 05:22:59 +0000 (0:00:00.794) 0:12:55.566 ******** 2026-04-11 05:23:09.298580 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298591 | orchestrator | 2026-04-11 05:23:09.298602 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-11 05:23:09.298613 | orchestrator | Saturday 11 April 2026 05:23:00 +0000 (0:00:00.782) 0:12:56.349 ******** 2026-04-11 05:23:09.298623 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298634 | orchestrator | 2026-04-11 05:23:09.298645 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-11 05:23:09.298655 | orchestrator | Saturday 11 April 2026 05:23:00 +0000 (0:00:00.786) 0:12:57.135 ******** 2026-04-11 05:23:09.298666 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298677 | orchestrator | 2026-04-11 
05:23:09.298688 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-11 05:23:09.298705 | orchestrator | Saturday 11 April 2026 05:23:01 +0000 (0:00:00.797) 0:12:57.932 ******** 2026-04-11 05:23:09.298716 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298727 | orchestrator | 2026-04-11 05:23:09.298738 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-11 05:23:09.298748 | orchestrator | Saturday 11 April 2026 05:23:02 +0000 (0:00:00.803) 0:12:58.736 ******** 2026-04-11 05:23:09.298759 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298769 | orchestrator | 2026-04-11 05:23:09.298780 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-11 05:23:09.298791 | orchestrator | Saturday 11 April 2026 05:23:03 +0000 (0:00:00.915) 0:12:59.652 ******** 2026-04-11 05:23:09.298802 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298812 | orchestrator | 2026-04-11 05:23:09.298823 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-11 05:23:09.298833 | orchestrator | Saturday 11 April 2026 05:23:04 +0000 (0:00:00.805) 0:13:00.458 ******** 2026-04-11 05:23:09.298844 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298855 | orchestrator | 2026-04-11 05:23:09.298866 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-11 05:23:09.298876 | orchestrator | Saturday 11 April 2026 05:23:05 +0000 (0:00:00.906) 0:13:01.365 ******** 2026-04-11 05:23:09.298887 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298898 | orchestrator | 2026-04-11 05:23:09.298908 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-11 05:23:09.298919 | orchestrator | Saturday 11 April 2026 05:23:05 +0000 (0:00:00.806) 
0:13:02.172 ******** 2026-04-11 05:23:09.298937 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298947 | orchestrator | 2026-04-11 05:23:09.298958 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 05:23:09.298971 | orchestrator | Saturday 11 April 2026 05:23:06 +0000 (0:00:00.757) 0:13:02.930 ******** 2026-04-11 05:23:09.298982 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.298992 | orchestrator | 2026-04-11 05:23:09.299003 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 05:23:09.299014 | orchestrator | Saturday 11 April 2026 05:23:07 +0000 (0:00:00.772) 0:13:03.703 ******** 2026-04-11 05:23:09.299024 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.299035 | orchestrator | 2026-04-11 05:23:09.299065 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 05:23:09.299076 | orchestrator | Saturday 11 April 2026 05:23:08 +0000 (0:00:00.831) 0:13:04.535 ******** 2026-04-11 05:23:09.299087 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.299098 | orchestrator | 2026-04-11 05:23:09.299109 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 05:23:09.299120 | orchestrator | Saturday 11 April 2026 05:23:09 +0000 (0:00:00.813) 0:13:05.348 ******** 2026-04-11 05:23:09.299131 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:23:09.299142 | orchestrator | 2026-04-11 05:23:09.299161 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 05:24:26.134748 | orchestrator | Saturday 11 April 2026 05:23:09 +0000 (0:00:00.797) 0:13:06.145 ******** 2026-04-11 05:24:26.134906 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-11 05:24:26.134922 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-11 05:24:26.134934 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-11 05:24:26.134946 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:24:26.134958 | orchestrator | 2026-04-11 05:24:26.134970 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 05:24:26.134982 | orchestrator | Saturday 11 April 2026 05:23:11 +0000 (0:00:01.083) 0:13:07.229 ******** 2026-04-11 05:24:26.134993 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-11 05:24:26.135004 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-11 05:24:26.135015 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-11 05:24:26.135026 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:24:26.135037 | orchestrator | 2026-04-11 05:24:26.135048 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 05:24:26.135117 | orchestrator | Saturday 11 April 2026 05:23:12 +0000 (0:00:01.065) 0:13:08.295 ******** 2026-04-11 05:24:26.135130 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-11 05:24:26.135141 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-11 05:24:26.135152 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-11 05:24:26.135163 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:24:26.135174 | orchestrator | 2026-04-11 05:24:26.135185 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 05:24:26.135196 | orchestrator | Saturday 11 April 2026 05:23:13 +0000 (0:00:01.047) 0:13:09.343 ******** 2026-04-11 05:24:26.135207 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:24:26.135219 | orchestrator | 2026-04-11 05:24:26.135231 | orchestrator | TASK [ceph-facts : 
Set_fact rgw_instances] ************************************* 2026-04-11 05:24:26.135244 | orchestrator | Saturday 11 April 2026 05:23:13 +0000 (0:00:00.772) 0:13:10.115 ******** 2026-04-11 05:24:26.135258 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-11 05:24:26.135271 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:24:26.135283 | orchestrator | 2026-04-11 05:24:26.135296 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-11 05:24:26.135309 | orchestrator | Saturday 11 April 2026 05:23:14 +0000 (0:00:00.930) 0:13:11.046 ******** 2026-04-11 05:24:26.135353 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:24:26.135367 | orchestrator | 2026-04-11 05:24:26.135379 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-11 05:24:26.135393 | orchestrator | Saturday 11 April 2026 05:23:16 +0000 (0:00:01.544) 0:13:12.591 ******** 2026-04-11 05:24:26.135403 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:24:26.135415 | orchestrator | 2026-04-11 05:24:26.135426 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-11 05:24:26.135455 | orchestrator | Saturday 11 April 2026 05:23:17 +0000 (0:00:00.806) 0:13:13.397 ******** 2026-04-11 05:24:26.135466 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1 2026-04-11 05:24:26.135478 | orchestrator | 2026-04-11 05:24:26.135489 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-11 05:24:26.135500 | orchestrator | Saturday 11 April 2026 05:23:18 +0000 (0:00:01.119) 0:13:14.517 ******** 2026-04-11 05:24:26.135511 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-04-11 05:24:26.135521 | orchestrator | 2026-04-11 05:24:26.135532 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] 
***************************** 2026-04-11 05:24:26.135543 | orchestrator | Saturday 11 April 2026 05:23:21 +0000 (0:00:03.235) 0:13:17.753 ******** 2026-04-11 05:24:26.135554 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:24:26.135564 | orchestrator | 2026-04-11 05:24:26.135575 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-11 05:24:26.135586 | orchestrator | Saturday 11 April 2026 05:23:22 +0000 (0:00:01.115) 0:13:18.869 ******** 2026-04-11 05:24:26.135597 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:24:26.135608 | orchestrator | 2026-04-11 05:24:26.135618 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-11 05:24:26.135629 | orchestrator | Saturday 11 April 2026 05:23:23 +0000 (0:00:01.114) 0:13:19.983 ******** 2026-04-11 05:24:26.135640 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:24:26.135651 | orchestrator | 2026-04-11 05:24:26.135677 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-11 05:24:26.135699 | orchestrator | Saturday 11 April 2026 05:23:24 +0000 (0:00:01.201) 0:13:21.184 ******** 2026-04-11 05:24:26.135711 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:24:26.135721 | orchestrator | 2026-04-11 05:24:26.135732 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-11 05:24:26.135743 | orchestrator | Saturday 11 April 2026 05:23:27 +0000 (0:00:02.175) 0:13:23.360 ******** 2026-04-11 05:24:26.135754 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:24:26.135765 | orchestrator | 2026-04-11 05:24:26.135776 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-11 05:24:26.135786 | orchestrator | Saturday 11 April 2026 05:23:28 +0000 (0:00:01.612) 0:13:24.972 ******** 2026-04-11 05:24:26.135797 | orchestrator | ok: [testbed-node-1] 2026-04-11 
05:24:26.135808 | orchestrator | 2026-04-11 05:24:26.135819 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-11 05:24:26.135830 | orchestrator | Saturday 11 April 2026 05:23:30 +0000 (0:00:01.526) 0:13:26.499 ******** 2026-04-11 05:24:26.135840 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:24:26.135851 | orchestrator | 2026-04-11 05:24:26.135862 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-11 05:24:26.135872 | orchestrator | Saturday 11 April 2026 05:23:31 +0000 (0:00:01.536) 0:13:28.035 ******** 2026-04-11 05:24:26.135884 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:24:26.135895 | orchestrator | 2026-04-11 05:24:26.135927 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-11 05:24:26.135939 | orchestrator | Saturday 11 April 2026 05:23:33 +0000 (0:00:01.594) 0:13:29.630 ******** 2026-04-11 05:24:26.135950 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:24:26.135961 | orchestrator | 2026-04-11 05:24:26.135971 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-11 05:24:26.135992 | orchestrator | Saturday 11 April 2026 05:23:35 +0000 (0:00:01.645) 0:13:31.275 ******** 2026-04-11 05:24:26.136003 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 05:24:26.136015 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-04-11 05:24:26.136026 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-11 05:24:26.136037 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-04-11 05:24:26.136047 | orchestrator | 2026-04-11 05:24:26.136090 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-11 05:24:26.136102 | orchestrator | 
Saturday 11 April 2026 05:23:38 +0000 (0:00:03.919) 0:13:35.195 ******** 2026-04-11 05:24:26.136113 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:24:26.136123 | orchestrator | 2026-04-11 05:24:26.136134 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-11 05:24:26.136145 | orchestrator | Saturday 11 April 2026 05:23:41 +0000 (0:00:02.049) 0:13:37.244 ******** 2026-04-11 05:24:26.136156 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:24:26.136166 | orchestrator | 2026-04-11 05:24:26.136177 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-11 05:24:26.136188 | orchestrator | Saturday 11 April 2026 05:23:42 +0000 (0:00:01.119) 0:13:38.364 ******** 2026-04-11 05:24:26.136198 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:24:26.136209 | orchestrator | 2026-04-11 05:24:26.136220 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-11 05:24:26.136230 | orchestrator | Saturday 11 April 2026 05:23:43 +0000 (0:00:01.195) 0:13:39.559 ******** 2026-04-11 05:24:26.136241 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:24:26.136252 | orchestrator | 2026-04-11 05:24:26.136263 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-11 05:24:26.136273 | orchestrator | Saturday 11 April 2026 05:23:45 +0000 (0:00:01.757) 0:13:41.317 ******** 2026-04-11 05:24:26.136284 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:24:26.136294 | orchestrator | 2026-04-11 05:24:26.136305 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-11 05:24:26.136316 | orchestrator | Saturday 11 April 2026 05:23:46 +0000 (0:00:01.586) 0:13:42.903 ******** 2026-04-11 05:24:26.136327 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:24:26.136337 | orchestrator | 2026-04-11 05:24:26.136348 | orchestrator | TASK 
[ceph-mon : Include start_monitor.yml] ************************************ 2026-04-11 05:24:26.136359 | orchestrator | Saturday 11 April 2026 05:23:47 +0000 (0:00:00.791) 0:13:43.695 ******** 2026-04-11 05:24:26.136370 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-04-11 05:24:26.136380 | orchestrator | 2026-04-11 05:24:26.136397 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-11 05:24:26.136408 | orchestrator | Saturday 11 April 2026 05:23:48 +0000 (0:00:01.147) 0:13:44.842 ******** 2026-04-11 05:24:26.136419 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:24:26.136429 | orchestrator | 2026-04-11 05:24:26.136440 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-11 05:24:26.136451 | orchestrator | Saturday 11 April 2026 05:23:49 +0000 (0:00:01.137) 0:13:45.979 ******** 2026-04-11 05:24:26.136462 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:24:26.136472 | orchestrator | 2026-04-11 05:24:26.136483 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-11 05:24:26.136494 | orchestrator | Saturday 11 April 2026 05:23:50 +0000 (0:00:01.188) 0:13:47.168 ******** 2026-04-11 05:24:26.136505 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-04-11 05:24:26.136515 | orchestrator | 2026-04-11 05:24:26.136526 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-11 05:24:26.136537 | orchestrator | Saturday 11 April 2026 05:23:52 +0000 (0:00:01.122) 0:13:48.290 ******** 2026-04-11 05:24:26.136547 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:24:26.136566 | orchestrator | 2026-04-11 05:24:26.136577 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-11 05:24:26.136587 | orchestrator | 
Saturday 11 April 2026 05:23:54 +0000 (0:00:02.347) 0:13:50.637 ******** 2026-04-11 05:24:26.136598 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:24:26.136608 | orchestrator | 2026-04-11 05:24:26.136619 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-11 05:24:26.136630 | orchestrator | Saturday 11 April 2026 05:23:56 +0000 (0:00:02.063) 0:13:52.701 ******** 2026-04-11 05:24:26.136640 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:24:26.136651 | orchestrator | 2026-04-11 05:24:26.136662 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-11 05:24:26.136673 | orchestrator | Saturday 11 April 2026 05:23:58 +0000 (0:00:02.482) 0:13:55.183 ******** 2026-04-11 05:24:26.136683 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:24:26.136694 | orchestrator | 2026-04-11 05:24:26.136705 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-11 05:24:26.136716 | orchestrator | Saturday 11 April 2026 05:24:01 +0000 (0:00:02.982) 0:13:58.165 ******** 2026-04-11 05:24:26.136726 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-04-11 05:24:26.136737 | orchestrator | 2026-04-11 05:24:26.136748 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-11 05:24:26.136759 | orchestrator | Saturday 11 April 2026 05:24:03 +0000 (0:00:01.104) 0:13:59.270 ******** 2026-04-11 05:24:26.136769 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
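The "Waiting for the monitor(s) to form the quorum..." task above is Ansible's `retries`/`until` pattern: it failed once ("10 retries left"), then succeeded roughly 23 seconds later. A generic polling-loop sketch of that behavior (a hedged illustration, not ceph-ansible's actual task, which runs a `ceph quorum_status` check in the mon container):

```python
import time

def wait_for(check, retries: int = 10, delay: float = 2.0) -> bool:
    """Poll *check* until it returns True or the retries run out,
    mirroring Ansible's retries/delay on the quorum task."""
    for _attempt in range(retries):
        if check():
            return True
        # Ansible logs "FAILED - RETRYING: ... (N retries left)" here.
        time.sleep(delay)
    return False

# Usage: a fake monitor that joins quorum on the third poll.
state = {"polls": 0}
def in_quorum() -> bool:
    state["polls"] += 1
    return state["polls"] >= 3

ok = wait_for(in_quorum, retries=10, delay=0.01)
print(ok, state["polls"])  # True 3
```

Note the exact retry accounting differs slightly in Ansible (the first attempt is not counted as a retry); the sketch only shows the poll-sleep-repeat shape.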
2026-04-11 05:24:26.136781 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:24:26.136791 | orchestrator | 2026-04-11 05:24:26.136802 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-11 05:24:26.136820 | orchestrator | Saturday 11 April 2026 05:24:26 +0000 (0:00:23.065) 0:14:22.336 ******** 2026-04-11 05:25:06.810607 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:25:06.810708 | orchestrator | 2026-04-11 05:25:06.810724 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-11 05:25:06.810737 | orchestrator | Saturday 11 April 2026 05:24:28 +0000 (0:00:02.631) 0:14:24.967 ******** 2026-04-11 05:25:06.810748 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:25:06.810760 | orchestrator | 2026-04-11 05:25:06.810771 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-11 05:25:06.810782 | orchestrator | Saturday 11 April 2026 05:24:29 +0000 (0:00:00.821) 0:14:25.789 ******** 2026-04-11 05:25:06.810795 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-11 05:25:06.810809 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-11 05:25:06.810820 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-11 05:25:06.810831 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-11 05:25:06.810877 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-11 05:25:06.810890 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}])  2026-04-11 05:25:06.810904 | orchestrator | 2026-04-11 05:25:06.810915 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-04-11 05:25:06.810927 | orchestrator | Saturday 11 April 2026 05:24:39 +0000 (0:00:09.519) 0:14:35.309 ******** 2026-04-11 05:25:06.810939 | orchestrator | changed: [testbed-node-1] 2026-04-11 05:25:06.810950 | orchestrator | 
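The "Set cluster configs" items above show the task expanding a `{section: {option: value}}` mapping into per-option loop items, applying each one and skipping options whose value is Ansible's `__omit_place_holder__` sentinel (here `osd_crush_chooseleaf_type`). A loose Python sketch of that expansion (illustrative only; the real task uses an Ansible subelements-style loop, and the placeholder suffix below is shortened):

```python
OMIT_PREFIX = "__omit_place_holder__"

# Section layout as shown in the loop items above.
ceph_config = {
    "global": {
        "public_network": "192.168.16.0/20",
        "cluster_network": "192.168.16.0/20",
        "osd_pool_default_crush_rule": -1,
        "ms_bind_ipv6": "False",
        "ms_bind_ipv4": "True",
        "osd_crush_chooseleaf_type": OMIT_PREFIX + "5c5e65ac",
    },
}

def flatten(config: dict) -> list:
    """Expand {section: {option: value}} into (section, option, value)
    triples, dropping omitted values like the skipped item above."""
    items = []
    for section, options in config.items():
        for option, value in options.items():
            if isinstance(value, str) and value.startswith(OMIT_PREFIX):
                continue  # Ansible reports these as 'skipping'
            items.append((section, option, value))
    return items

applied = flatten(ceph_config)
print(len(applied))  # 5 settings applied, 1 omitted
```

This matches the log: five `ok` items for the `global` options and one `skipping` item for the omit-placeholder value.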
2026-04-11 05:25:06.810962 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 05:25:06.810974 | orchestrator | Saturday 11 April 2026 05:24:41 +0000 (0:00:02.215) 0:14:37.525 ******** 2026-04-11 05:25:06.810985 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:25:06.810997 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-04-11 05:25:06.811009 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-04-11 05:25:06.811020 | orchestrator | 2026-04-11 05:25:06.811032 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 05:25:06.811043 | orchestrator | Saturday 11 April 2026 05:24:43 +0000 (0:00:01.818) 0:14:39.344 ******** 2026-04-11 05:25:06.811055 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-11 05:25:06.811112 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-11 05:25:06.811126 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-11 05:25:06.811137 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:25:06.811150 | orchestrator | 2026-04-11 05:25:06.811162 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-04-11 05:25:06.811175 | orchestrator | Saturday 11 April 2026 05:24:44 +0000 (0:00:01.076) 0:14:40.420 ******** 2026-04-11 05:25:06.811188 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:25:06.811200 | orchestrator | 2026-04-11 05:25:06.811214 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-04-11 05:25:06.811243 | orchestrator | Saturday 11 April 2026 05:24:45 +0000 (0:00:00.799) 0:14:41.219 ******** 2026-04-11 05:25:06.811256 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:25:06.811269 | orchestrator | 2026-04-11 05:25:06.811282 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-11 05:25:06.811294 | orchestrator | Saturday 11 April 2026 05:24:47 +0000 (0:00:02.381) 0:14:43.601 ******** 2026-04-11 05:25:06.811307 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:25:06.811320 | orchestrator | 2026-04-11 05:25:06.811332 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-11 05:25:06.811357 | orchestrator | Saturday 11 April 2026 05:24:48 +0000 (0:00:00.780) 0:14:44.382 ******** 2026-04-11 05:25:06.811370 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:25:06.811393 | orchestrator | 2026-04-11 05:25:06.811406 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-11 05:25:06.811419 | orchestrator | Saturday 11 April 2026 05:24:48 +0000 (0:00:00.791) 0:14:45.174 ******** 2026-04-11 05:25:06.811431 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:25:06.811453 | orchestrator | 2026-04-11 05:25:06.811466 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-11 05:25:06.811479 | orchestrator | Saturday 11 April 2026 05:24:49 +0000 (0:00:00.799) 0:14:45.974 ******** 2026-04-11 05:25:06.811492 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:25:06.811503 | orchestrator | 2026-04-11 05:25:06.811513 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-11 05:25:06.811524 | orchestrator | Saturday 11 April 2026 05:24:50 +0000 (0:00:00.818) 0:14:46.792 ******** 2026-04-11 05:25:06.811535 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:25:06.811546 | 
orchestrator |
2026-04-11 05:25:06.811556 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-11 05:25:06.811567 | orchestrator | Saturday 11 April 2026 05:24:51 +0000 (0:00:00.793) 0:14:47.586 ********
2026-04-11 05:25:06.811578 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:25:06.811589 | orchestrator |
2026-04-11 05:25:06.811599 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-11 05:25:06.811610 | orchestrator | Saturday 11 April 2026 05:24:52 +0000 (0:00:00.765) 0:14:48.351 ********
2026-04-11 05:25:06.811621 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:25:06.811632 | orchestrator |
2026-04-11 05:25:06.811642 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-11 05:25:06.811653 | orchestrator |
2026-04-11 05:25:06.811664 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-11 05:25:06.811674 | orchestrator | Saturday 11 April 2026 05:24:53 +0000 (0:00:01.027) 0:14:49.379 ********
2026-04-11 05:25:06.811685 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:06.811696 | orchestrator |
2026-04-11 05:25:06.811707 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-11 05:25:06.811718 | orchestrator | Saturday 11 April 2026 05:24:54 +0000 (0:00:01.193) 0:14:50.572 ********
2026-04-11 05:25:06.811729 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:06.811739 | orchestrator |
2026-04-11 05:25:06.811750 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-11 05:25:06.811766 | orchestrator | Saturday 11 April 2026 05:24:55 +0000 (0:00:00.825) 0:14:51.398 ********
2026-04-11 05:25:06.811778 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:06.811789 | orchestrator |
2026-04-11 05:25:06.811799 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-11 05:25:06.811810 | orchestrator | Saturday 11 April 2026 05:24:55 +0000 (0:00:00.786) 0:14:52.184 ********
2026-04-11 05:25:06.811821 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:06.811832 | orchestrator |
2026-04-11 05:25:06.811843 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-11 05:25:06.811853 | orchestrator | Saturday 11 April 2026 05:24:56 +0000 (0:00:00.827) 0:14:53.012 ********
2026-04-11 05:25:06.811864 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-04-11 05:25:06.811875 | orchestrator |
2026-04-11 05:25:06.811885 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-11 05:25:06.811896 | orchestrator | Saturday 11 April 2026 05:24:57 +0000 (0:00:01.139) 0:14:54.152 ********
2026-04-11 05:25:06.811907 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:06.811918 | orchestrator |
2026-04-11 05:25:06.811928 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-11 05:25:06.811939 | orchestrator | Saturday 11 April 2026 05:24:59 +0000 (0:00:01.505) 0:14:55.657 ********
2026-04-11 05:25:06.811950 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:06.811960 | orchestrator |
2026-04-11 05:25:06.811971 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-11 05:25:06.811981 | orchestrator | Saturday 11 April 2026 05:25:00 +0000 (0:00:01.192) 0:14:56.850 ********
2026-04-11 05:25:06.811992 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:06.812003 | orchestrator |
2026-04-11 05:25:06.812013 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-11 05:25:06.812030 | orchestrator | Saturday 11 April 2026 05:25:02 +0000 (0:00:01.488) 0:14:58.338 ********
2026-04-11 05:25:06.812041 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:06.812051 | orchestrator |
2026-04-11 05:25:06.812062 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-11 05:25:06.812102 | orchestrator | Saturday 11 April 2026 05:25:03 +0000 (0:00:01.159) 0:14:59.497 ********
2026-04-11 05:25:06.812121 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:06.812141 | orchestrator |
2026-04-11 05:25:06.812160 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-11 05:25:06.812172 | orchestrator | Saturday 11 April 2026 05:25:04 +0000 (0:00:01.185) 0:15:00.683 ********
2026-04-11 05:25:06.812183 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:06.812193 | orchestrator |
2026-04-11 05:25:06.812204 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-11 05:25:06.812215 | orchestrator | Saturday 11 April 2026 05:25:05 +0000 (0:00:01.166) 0:15:01.850 ********
2026-04-11 05:25:06.812226 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:06.812237 | orchestrator |
2026-04-11 05:25:06.812247 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-11 05:25:06.812265 | orchestrator | Saturday 11 April 2026 05:25:06 +0000 (0:00:01.164) 0:15:03.014 ********
2026-04-11 05:25:31.095942 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:31.096060 | orchestrator |
2026-04-11 05:25:31.096137 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-11 05:25:31.096153 | orchestrator | Saturday 11 April 2026 05:25:07 +0000 (0:00:01.108) 0:15:04.123 ********
2026-04-11 05:25:31.096165 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:25:31.096177 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:25:31.096189 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 05:25:31.096200 | orchestrator |
2026-04-11 05:25:31.096212 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-11 05:25:31.096223 | orchestrator | Saturday 11 April 2026 05:25:09 +0000 (0:00:01.863) 0:15:05.987 ********
2026-04-11 05:25:31.096234 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:31.096256 | orchestrator |
2026-04-11 05:25:31.096268 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-11 05:25:31.096279 | orchestrator | Saturday 11 April 2026 05:25:10 +0000 (0:00:01.181) 0:15:07.169 ********
2026-04-11 05:25:31.096290 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:25:31.096301 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:25:31.096311 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 05:25:31.096322 | orchestrator |
2026-04-11 05:25:31.096333 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-11 05:25:31.096344 | orchestrator | Saturday 11 April 2026 05:25:13 +0000 (0:00:02.983) 0:15:10.152 ********
2026-04-11 05:25:31.096356 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-11 05:25:31.096367 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-11 05:25:31.096378 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 05:25:31.096389 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:31.096400 | orchestrator |
2026-04-11 05:25:31.096411 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-11 05:25:31.096422 | orchestrator | Saturday 11 April 2026 05:25:15 +0000 (0:00:01.506) 0:15:11.659 ********
2026-04-11 05:25:31.096435 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 05:25:31.096465 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 05:25:31.096498 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 05:25:31.096511 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:31.096522 | orchestrator |
2026-04-11 05:25:31.096533 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-11 05:25:31.096544 | orchestrator | Saturday 11 April 2026 05:25:17 +0000 (0:00:01.627) 0:15:13.287 ********
2026-04-11 05:25:31.096557 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:25:31.096573 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:25:31.096593 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:25:31.096612 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:31.096630 | orchestrator |
2026-04-11 05:25:31.096649 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-11 05:25:31.096669 | orchestrator | Saturday 11 April 2026 05:25:18 +0000 (0:00:01.142) 0:15:14.430 ********
2026-04-11 05:25:31.096715 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 05:25:11.679545', 'end': '2026-04-11 05:25:11.724169', 'delta': '0:00:00.044624', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 05:25:31.096737 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '26fb3b048944', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 05:25:12.196239', 'end': '2026-04-11 05:25:12.245108', 'delta': '0:00:00.048869', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26fb3b048944'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 05:25:31.096756 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'f023dde40a6c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 05:25:12.705827', 'end': '2026-04-11 05:25:12.747359', 'delta': '0:00:00.041532', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f023dde40a6c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 05:25:31.096778 | orchestrator |
2026-04-11 05:25:31.096790 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-11 05:25:31.096801 | orchestrator | Saturday 11 April 2026 05:25:19 +0000 (0:00:01.199) 0:15:15.630 ********
2026-04-11 05:25:31.096812 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:31.096823 | orchestrator |
2026-04-11 05:25:31.096834 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-11 05:25:31.096845 | orchestrator | Saturday 11 April 2026 05:25:20 +0000 (0:00:01.256) 0:15:16.886 ********
2026-04-11 05:25:31.096856 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:31.096867 | orchestrator |
2026-04-11 05:25:31.096877 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-11 05:25:31.096888 | orchestrator | Saturday 11 April 2026 05:25:21 +0000 (0:00:01.238) 0:15:18.125 ********
2026-04-11 05:25:31.096899 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:31.096910 | orchestrator |
2026-04-11 05:25:31.096921 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-11 05:25:31.096931 | orchestrator | Saturday 11 April 2026 05:25:23 +0000 (0:00:01.162) 0:15:19.287 ********
2026-04-11 05:25:31.096942 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)]
2026-04-11 05:25:31.096953 | orchestrator |
2026-04-11 05:25:31.096963 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 05:25:31.096974 | orchestrator | Saturday 11 April 2026 05:25:25 +0000 (0:00:02.026) 0:15:21.314 ********
2026-04-11 05:25:31.096985 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:31.096995 | orchestrator |
2026-04-11 05:25:31.097006 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-11 05:25:31.097017 | orchestrator | Saturday 11 April 2026 05:25:26 +0000 (0:00:01.214) 0:15:22.529 ********
2026-04-11 05:25:31.097027 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:31.097038 | orchestrator |
2026-04-11 05:25:31.097049 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-11 05:25:31.097059 | orchestrator | Saturday 11 April 2026 05:25:27 +0000 (0:00:01.158) 0:15:23.688 ********
2026-04-11 05:25:31.097070 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:31.097106 | orchestrator |
2026-04-11 05:25:31.097118 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 05:25:31.097129 | orchestrator | Saturday 11 April 2026 05:25:28 +0000 (0:00:01.286) 0:15:24.975 ********
2026-04-11 05:25:31.097140 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:31.097151 | orchestrator |
2026-04-11 05:25:31.097162 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-11 05:25:31.097173 | orchestrator | Saturday 11 April 2026 05:25:29 +0000 (0:00:01.161) 0:15:26.136 ********
2026-04-11 05:25:31.097183 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:31.097194 | orchestrator |
2026-04-11 05:25:31.097206 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-11 05:25:31.097224 | orchestrator | Saturday 11 April 2026 05:25:31 +0000 (0:00:01.159) 0:15:27.296 ********
2026-04-11 05:25:38.204116 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:38.204233 | orchestrator |
2026-04-11 05:25:38.204252 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-11 05:25:38.204266 | orchestrator | Saturday 11 April 2026 05:25:32 +0000 (0:00:01.146) 0:15:28.442 ********
2026-04-11 05:25:38.204304 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:38.204316 | orchestrator |
2026-04-11 05:25:38.204328 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-11 05:25:38.204339 | orchestrator | Saturday 11 April 2026 05:25:33 +0000 (0:00:01.182) 0:15:29.625 ********
2026-04-11 05:25:38.204349 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:38.204360 | orchestrator |
2026-04-11 05:25:38.204371 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-11 05:25:38.204382 | orchestrator | Saturday 11 April 2026 05:25:34 +0000 (0:00:01.114) 0:15:30.739 ********
2026-04-11 05:25:38.204393 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:38.204404 |
orchestrator | 2026-04-11 05:25:38.204415 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-11 05:25:38.204427 | orchestrator | Saturday 11 April 2026 05:25:35 +0000 (0:00:01.158) 0:15:31.897 ******** 2026-04-11 05:25:38.204437 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:25:38.204448 | orchestrator | 2026-04-11 05:25:38.204459 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-11 05:25:38.204470 | orchestrator | Saturday 11 April 2026 05:25:36 +0000 (0:00:01.162) 0:15:33.060 ******** 2026-04-11 05:25:38.204483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:25:38.204498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:25:38.204525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-04-11 05:25:38.204539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 05:25:38.204553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:25:38.204565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:25:38.204603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 
05:25:38.204628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6e1b70df', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 05:25:38.204647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:25:38.204660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:25:38.204673 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:25:38.204685 | orchestrator | 2026-04-11 05:25:38.204698 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 05:25:38.204712 | orchestrator | Saturday 11 April 2026 05:25:38 +0000 (0:00:01.269) 0:15:34.329 ******** 2026-04-11 05:25:38.204726 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:25:38.204758 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:25:46.972468 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:25:46.972577 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:25:46.972687 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:25:46.972709 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:25:46.972720 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:25:46.972772 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6e1b70df', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:25:46.972793 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:25:46.972804 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:25:46.972823 | orchestrator | skipping: [testbed-node-2] 2026-04-11 
05:25:46.972835 | orchestrator |
2026-04-11 05:25:46.972846 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-11 05:25:46.972857 | orchestrator | Saturday 11 April 2026 05:25:39 +0000 (0:00:01.500) 0:15:35.595 ********
2026-04-11 05:25:46.972867 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:46.972878 | orchestrator |
2026-04-11 05:25:46.972888 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-11 05:25:46.972897 | orchestrator | Saturday 11 April 2026 05:25:40 +0000 (0:00:01.121) 0:15:37.095 ********
2026-04-11 05:25:46.972907 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:46.972917 | orchestrator |
2026-04-11 05:25:46.972927 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-11 05:25:46.972936 | orchestrator | Saturday 11 April 2026 05:25:42 +0000 (0:00:01.486) 0:15:38.217 ********
2026-04-11 05:25:46.972946 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:25:46.972955 | orchestrator |
2026-04-11 05:25:46.972965 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-11 05:25:46.972974 | orchestrator | Saturday 11 April 2026 05:25:43 +0000 (0:00:01.486) 0:15:39.704 ********
2026-04-11 05:25:46.972984 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:46.972993 | orchestrator |
2026-04-11 05:25:46.973005 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-11 05:25:46.973017 | orchestrator | Saturday 11 April 2026 05:25:44 +0000 (0:00:01.099) 0:15:40.803 ********
2026-04-11 05:25:46.973028 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:46.973039 | orchestrator |
2026-04-11 05:25:46.973050 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-11 05:25:46.973061 | orchestrator | Saturday 11 April 2026 05:25:45 +0000 (0:00:01.257) 0:15:42.061 ********
2026-04-11 05:25:46.973072 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:25:46.973119 | orchestrator |
2026-04-11 05:25:46.973131 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-11 05:25:46.973150 | orchestrator | Saturday 11 April 2026 05:25:46 +0000 (0:00:01.118) 0:15:43.180 ********
2026-04-11 05:26:26.689437 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-11 05:26:26.689554 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-11 05:26:26.689570 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 05:26:26.689583 | orchestrator |
2026-04-11 05:26:26.689596 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-11 05:26:26.689608 | orchestrator | Saturday 11 April 2026 05:25:48 +0000 (0:00:02.004) 0:15:45.185 ********
2026-04-11 05:26:26.689621 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-11 05:26:26.689632 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-11 05:26:26.689643 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 05:26:26.689654 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:26:26.689665 | orchestrator |
2026-04-11 05:26:26.689676 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-11 05:26:26.689687 | orchestrator | Saturday 11 April 2026 05:25:50 +0000 (0:00:01.143) 0:15:46.328 ********
2026-04-11 05:26:26.689699 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:26:26.689710 | orchestrator |
2026-04-11 05:26:26.689721 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-11 05:26:26.689732 | orchestrator | Saturday 11 April 2026 05:25:51 +0000 (0:00:01.123) 0:15:47.452 ********
2026-04-11 05:26:26.689743 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:26:26.689754 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:26:26.689765 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 05:26:26.689776 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-11 05:26:26.689811 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-11 05:26:26.689823 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-11 05:26:26.689833 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-11 05:26:26.689844 | orchestrator |
2026-04-11 05:26:26.689870 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-11 05:26:26.689881 | orchestrator | Saturday 11 April 2026 05:25:53 +0000 (0:00:01.905) 0:15:49.357 ********
2026-04-11 05:26:26.689892 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:26:26.689902 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:26:26.689913 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 05:26:26.689924 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-11 05:26:26.689935 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-11 05:26:26.689945 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-11 05:26:26.689956 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-11 05:26:26.689967 | orchestrator |
2026-04-11 05:26:26.689977 |
orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-04-11 05:26:26.689988 | orchestrator | Saturday 11 April 2026 05:25:55 +0000 (0:00:02.336) 0:15:51.694 ******** 2026-04-11 05:26:26.689999 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:26:26.690010 | orchestrator | 2026-04-11 05:26:26.690078 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-04-11 05:26:26.690125 | orchestrator | Saturday 11 April 2026 05:25:56 +0000 (0:00:00.913) 0:15:52.607 ******** 2026-04-11 05:26:26.690136 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:26:26.690147 | orchestrator | 2026-04-11 05:26:26.690158 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-04-11 05:26:26.690169 | orchestrator | Saturday 11 April 2026 05:25:57 +0000 (0:00:00.944) 0:15:53.552 ******** 2026-04-11 05:26:26.690179 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:26:26.690190 | orchestrator | 2026-04-11 05:26:26.690201 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-04-11 05:26:26.690211 | orchestrator | Saturday 11 April 2026 05:25:58 +0000 (0:00:00.807) 0:15:54.360 ******** 2026-04-11 05:26:26.690222 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:26:26.690232 | orchestrator | 2026-04-11 05:26:26.690243 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-04-11 05:26:26.690254 | orchestrator | Saturday 11 April 2026 05:25:59 +0000 (0:00:00.887) 0:15:55.248 ******** 2026-04-11 05:26:26.690265 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:26:26.690275 | orchestrator | 2026-04-11 05:26:26.690286 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-04-11 05:26:26.690297 | orchestrator | Saturday 11 April 2026 05:25:59 +0000 (0:00:00.769) 0:15:56.017 ******** 
2026-04-11 05:26:26.690307 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-11 05:26:26.690318 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-11 05:26:26.690329 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 05:26:26.690339 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:26:26.690350 | orchestrator |
2026-04-11 05:26:26.690361 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-04-11 05:26:26.690372 | orchestrator | Saturday 11 April 2026 05:26:01 +0000 (0:00:01.437) 0:15:57.456 ********
2026-04-11 05:26:26.690382 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-04-11 05:26:26.690393 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-04-11 05:26:26.690422 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-04-11 05:26:26.690444 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-04-11 05:26:26.690454 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-04-11 05:26:26.690465 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-04-11 05:26:26.690476 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:26:26.690487 | orchestrator |
2026-04-11 05:26:26.690498 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-04-11 05:26:26.690508 | orchestrator | Saturday 11 April 2026 05:26:02 +0000 (0:00:01.679) 0:15:59.135 ********
2026-04-11 05:26:26.690519 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 05:26:26.690530 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 05:26:26.690541 | orchestrator |
2026-04-11 05:26:26.690551 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-04-11 05:26:26.690562 | orchestrator | Saturday 11 April 2026 05:26:07 +0000 (0:00:04.276) 0:16:03.411 ********
2026-04-11 05:26:26.690573 | orchestrator | changed: [testbed-node-2]
2026-04-11 05:26:26.690584 | orchestrator |
2026-04-11 05:26:26.690594 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-11 05:26:26.690605 | orchestrator | Saturday 11 April 2026 05:26:09 +0000 (0:00:02.103) 0:16:05.515 ********
2026-04-11 05:26:26.690616 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-04-11 05:26:26.690627 | orchestrator |
2026-04-11 05:26:26.690638 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-11 05:26:26.690649 | orchestrator | Saturday 11 April 2026 05:26:10 +0000 (0:00:01.182) 0:16:06.698 ********
2026-04-11 05:26:26.690660 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-04-11 05:26:26.690670 | orchestrator |
2026-04-11 05:26:26.690681 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-11 05:26:26.690692 | orchestrator | Saturday 11 April 2026 05:26:11 +0000 (0:00:01.129) 0:16:07.828 ********
2026-04-11 05:26:26.690703 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:26:26.690714 | orchestrator |
2026-04-11 05:26:26.690730 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-11 05:26:26.690741 | orchestrator | Saturday 11 April 2026 05:26:13 +0000 (0:00:01.522) 0:16:09.351 ********
2026-04-11 05:26:26.690752 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:26:26.690763 | orchestrator |
2026-04-11 05:26:26.690773 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-11 05:26:26.690784 | orchestrator | Saturday 11 April 2026 05:26:14 +0000 (0:00:01.108) 0:16:10.460 ********
2026-04-11 05:26:26.690795 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:26:26.690806 | orchestrator |
2026-04-11 05:26:26.690816 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-11 05:26:26.690827 | orchestrator | Saturday 11 April 2026 05:26:15 +0000 (0:00:01.128) 0:16:11.588 ********
2026-04-11 05:26:26.690838 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:26:26.690849 | orchestrator |
2026-04-11 05:26:26.690859 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-11 05:26:26.690870 | orchestrator | Saturday 11 April 2026 05:26:16 +0000 (0:00:01.120) 0:16:12.709 ********
2026-04-11 05:26:26.690881 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:26:26.690892 | orchestrator |
2026-04-11 05:26:26.690902 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-11 05:26:26.690913 | orchestrator | Saturday 11 April 2026 05:26:18 +0000 (0:00:01.561) 0:16:14.270 ********
2026-04-11 05:26:26.690924 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:26:26.690934 | orchestrator |
2026-04-11 05:26:26.690945 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-11 05:26:26.690956 | orchestrator | Saturday 11 April 2026 05:26:19 +0000 (0:00:01.133) 0:16:15.404 ********
2026-04-11 05:26:26.690974 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:26:26.690985 | orchestrator |
2026-04-11 05:26:26.690996 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-11 05:26:26.691006 | orchestrator | Saturday 11 April 2026 05:26:20 +0000 (0:00:01.134) 0:16:16.539 ********
2026-04-11 05:26:26.691017 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:26:26.691028 | orchestrator |
2026-04-11 05:26:26.691039 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-11 05:26:26.691049 | orchestrator | Saturday 11 April 2026 05:26:21 +0000 (0:00:01.560) 0:16:18.100 ********
2026-04-11 05:26:26.691060 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:26:26.691071 | orchestrator |
2026-04-11 05:26:26.691082 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-11 05:26:26.691127 | orchestrator | Saturday 11 April 2026 05:26:23 +0000 (0:00:01.512) 0:16:19.613 ********
2026-04-11 05:26:26.691138 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:26:26.691149 | orchestrator |
2026-04-11 05:26:26.691160 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 05:26:26.691170 | orchestrator | Saturday 11 April 2026 05:26:24 +0000 (0:00:00.772) 0:16:20.385 ********
2026-04-11 05:26:26.691181 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:26:26.691192 | orchestrator |
2026-04-11 05:26:26.691203 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 05:26:26.691214 | orchestrator | Saturday 11 April 2026 05:26:24 +0000 (0:00:00.818) 0:16:21.204 ********
2026-04-11 05:26:26.691224 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:26:26.691235 | orchestrator |
2026-04-11 05:26:26.691246 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 05:26:26.691257 | orchestrator | Saturday 11 April 2026 05:26:25 +0000 (0:00:00.860) 0:16:22.064 ********
2026-04-11 05:26:26.691267 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:26:26.691278 | orchestrator |
2026-04-11 05:26:26.691289 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 05:26:26.691300 | orchestrator | Saturday 11 April 2026 05:26:26 +0000 (0:00:00.777) 0:16:22.841 ********
2026-04-11 05:26:26.691318 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.947989 | orchestrator |
2026-04-11 05:27:07.948169 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 05:27:07.948187 | orchestrator | Saturday 11 April 2026 05:26:27 +0000 (0:00:00.793) 0:16:23.635 ********
2026-04-11 05:27:07.948199 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.948212 | orchestrator |
2026-04-11 05:27:07.948225 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 05:27:07.948236 | orchestrator | Saturday 11 April 2026 05:26:28 +0000 (0:00:00.808) 0:16:24.443 ********
2026-04-11 05:27:07.948247 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.948258 | orchestrator |
2026-04-11 05:27:07.948269 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 05:27:07.948280 | orchestrator | Saturday 11 April 2026 05:26:29 +0000 (0:00:00.783) 0:16:25.226 ********
2026-04-11 05:27:07.948292 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:27:07.948303 | orchestrator |
2026-04-11 05:27:07.948314 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 05:27:07.948326 | orchestrator | Saturday 11 April 2026 05:26:29 +0000 (0:00:00.822) 0:16:26.049 ********
2026-04-11 05:27:07.948336 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:27:07.948348 | orchestrator |
2026-04-11 05:27:07.948358 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 05:27:07.948370 | orchestrator | Saturday 11 April 2026 05:26:30 +0000 (0:00:00.808) 0:16:26.857 ********
2026-04-11 05:27:07.948380 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:27:07.948391 | orchestrator |
2026-04-11 05:27:07.948402 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-11 05:27:07.948414 | orchestrator | Saturday 11 April 2026 05:26:31 +0000 (0:00:00.798) 0:16:27.656 ********
2026-04-11 05:27:07.948448 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.948460 | orchestrator |
2026-04-11 05:27:07.948471 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-11 05:27:07.948482 | orchestrator | Saturday 11 April 2026 05:26:32 +0000 (0:00:00.838) 0:16:28.495 ********
2026-04-11 05:27:07.948492 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.948503 | orchestrator |
2026-04-11 05:27:07.948514 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-11 05:27:07.948525 | orchestrator | Saturday 11 April 2026 05:26:33 +0000 (0:00:00.782) 0:16:29.278 ********
2026-04-11 05:27:07.948553 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.948566 | orchestrator |
2026-04-11 05:27:07.948579 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-11 05:27:07.948592 | orchestrator | Saturday 11 April 2026 05:26:33 +0000 (0:00:00.743) 0:16:30.022 ********
2026-04-11 05:27:07.948605 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.948617 | orchestrator |
2026-04-11 05:27:07.948631 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-11 05:27:07.948644 | orchestrator | Saturday 11 April 2026 05:26:34 +0000 (0:00:00.780) 0:16:30.802 ********
2026-04-11 05:27:07.948657 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.948670 | orchestrator |
2026-04-11 05:27:07.948682 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-11 05:27:07.948695 | orchestrator | Saturday 11 April 2026 05:26:35 +0000 (0:00:00.784) 0:16:31.587 ********
2026-04-11 05:27:07.948707 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.948720 | orchestrator |
2026-04-11 05:27:07.948733 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-11 05:27:07.948746 | orchestrator | Saturday 11 April 2026 05:26:36 +0000 (0:00:00.781) 0:16:32.368 ********
2026-04-11 05:27:07.948759 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.948772 | orchestrator |
2026-04-11 05:27:07.948784 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-11 05:27:07.948798 | orchestrator | Saturday 11 April 2026 05:26:36 +0000 (0:00:00.794) 0:16:33.163 ********
2026-04-11 05:27:07.948811 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.948823 | orchestrator |
2026-04-11 05:27:07.948835 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-11 05:27:07.948848 | orchestrator | Saturday 11 April 2026 05:26:37 +0000 (0:00:00.764) 0:16:33.928 ********
2026-04-11 05:27:07.948861 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.948874 | orchestrator |
2026-04-11 05:27:07.948886 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-11 05:27:07.948897 | orchestrator | Saturday 11 April 2026 05:26:38 +0000 (0:00:00.818) 0:16:34.746 ********
2026-04-11 05:27:07.948908 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.948919 | orchestrator |
2026-04-11 05:27:07.948930 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-11 05:27:07.948940 | orchestrator | Saturday 11 April 2026 05:26:39 +0000 (0:00:00.776) 0:16:35.523 ********
2026-04-11 05:27:07.948951 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.948962 | orchestrator |
2026-04-11 05:27:07.948973 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-11 05:27:07.948984 | orchestrator | Saturday 11 April 2026 05:26:40 +0000 (0:00:00.822) 0:16:36.345 ********
2026-04-11 05:27:07.948994 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.949005 | orchestrator |
2026-04-11 05:27:07.949016 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-11 05:27:07.949027 | orchestrator | Saturday 11 April 2026 05:26:40 +0000 (0:00:00.791) 0:16:37.136 ********
2026-04-11 05:27:07.949038 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:27:07.949049 | orchestrator |
2026-04-11 05:27:07.949059 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-11 05:27:07.949070 | orchestrator | Saturday 11 April 2026 05:26:42 +0000 (0:00:02.170) 0:16:38.780 ********
2026-04-11 05:27:07.949088 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:27:07.949119 | orchestrator |
2026-04-11 05:27:07.949130 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-11 05:27:07.949141 | orchestrator | Saturday 11 April 2026 05:26:44 +0000 (0:00:02.170) 0:16:40.951 ********
2026-04-11 05:27:07.949152 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-04-11 05:27:07.949164 | orchestrator |
2026-04-11 05:27:07.949193 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-11 05:27:07.949204 | orchestrator | Saturday 11 April 2026 05:26:45 +0000 (0:00:01.131) 0:16:42.083 ********
2026-04-11 05:27:07.949216 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.949227 | orchestrator |
2026-04-11 05:27:07.949238 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-11 05:27:07.949248 | orchestrator | Saturday 11 April 2026 05:26:46 +0000 (0:00:01.114) 0:16:43.197 ********
2026-04-11 05:27:07.949259 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.949270 | orchestrator |
2026-04-11 05:27:07.949281 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-11 05:27:07.949292 | orchestrator | Saturday 11 April 2026 05:26:48 +0000 (0:00:01.198) 0:16:44.396 ********
2026-04-11 05:27:07.949303 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-11 05:27:07.949314 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-11 05:27:07.949325 | orchestrator |
2026-04-11 05:27:07.949336 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-11 05:27:07.949347 | orchestrator | Saturday 11 April 2026 05:26:50 +0000 (0:00:01.816) 0:16:46.213 ********
2026-04-11 05:27:07.949358 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:27:07.949369 | orchestrator |
2026-04-11 05:27:07.949380 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-11 05:27:07.949391 | orchestrator | Saturday 11 April 2026 05:26:51 +0000 (0:00:01.479) 0:16:47.692 ********
2026-04-11 05:27:07.949401 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.949412 | orchestrator |
2026-04-11 05:27:07.949423 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-11 05:27:07.949434 | orchestrator | Saturday 11 April 2026 05:26:52 +0000 (0:00:01.128) 0:16:48.821 ********
2026-04-11 05:27:07.949445 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.949456 | orchestrator |
2026-04-11 05:27:07.949467 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-11 05:27:07.949478 | orchestrator | Saturday 11 April 2026 05:26:53 +0000 (0:00:00.760) 0:16:49.581 ********
2026-04-11 05:27:07.949488 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.949499 | orchestrator |
2026-04-11 05:27:07.949516 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-11 05:27:07.949527 | orchestrator | Saturday 11 April 2026 05:26:54 +0000 (0:00:00.771) 0:16:50.353 ********
2026-04-11 05:27:07.949538 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-04-11 05:27:07.949549 | orchestrator |
2026-04-11 05:27:07.949560 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-11 05:27:07.949570 | orchestrator | Saturday 11 April 2026 05:26:55 +0000 (0:00:01.142) 0:16:51.495 ********
2026-04-11 05:27:07.949581 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:27:07.949592 | orchestrator |
2026-04-11 05:27:07.949603 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-11 05:27:07.949614 | orchestrator | Saturday 11 April 2026 05:26:58 +0000 (0:00:02.837) 0:16:54.333 ********
2026-04-11 05:27:07.949625 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-11 05:27:07.949636 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-11 05:27:07.949647 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-11 05:27:07.949665 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.949676 | orchestrator |
2026-04-11 05:27:07.949687 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-11 05:27:07.949698 | orchestrator | Saturday 11 April 2026 05:26:59 +0000 (0:00:01.168) 0:16:55.502 ********
2026-04-11 05:27:07.949709 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.949720 | orchestrator |
2026-04-11 05:27:07.949731 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-11 05:27:07.949742 | orchestrator | Saturday 11 April 2026 05:27:00 +0000 (0:00:01.114) 0:16:56.617 ********
2026-04-11 05:27:07.949753 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.949764 | orchestrator |
2026-04-11 05:27:07.949774 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-11 05:27:07.949785 | orchestrator | Saturday 11 April 2026 05:27:01 +0000 (0:00:01.162) 0:16:57.779 ********
2026-04-11 05:27:07.949796 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.949807 | orchestrator |
2026-04-11 05:27:07.949818 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-11 05:27:07.949829 | orchestrator | Saturday 11 April 2026 05:27:02 +0000 (0:00:01.135) 0:16:58.914 ********
2026-04-11 05:27:07.949840 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.949850 | orchestrator |
2026-04-11 05:27:07.949861 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-11 05:27:07.949872 | orchestrator | Saturday 11 April 2026 05:27:03 +0000 (0:00:01.154) 0:17:00.069 ********
2026-04-11 05:27:07.949883 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:07.949894 | orchestrator |
2026-04-11 05:27:07.949905 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-11 05:27:07.949916 | orchestrator | Saturday 11 April 2026 05:27:04 +0000 (0:00:00.815) 0:17:00.885 ********
2026-04-11 05:27:07.949927 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:27:07.949937 | orchestrator |
2026-04-11 05:27:07.949948 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-11 05:27:07.949959 | orchestrator | Saturday 11 April 2026 05:27:06 +0000 (0:00:02.252) 0:17:03.138 ********
2026-04-11 05:27:07.949970 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:27:07.949981 | orchestrator |
2026-04-11 05:27:07.949992 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-11 05:27:07.950003 | orchestrator | Saturday 11 April 2026 05:27:07 +0000 (0:00:00.768) 0:17:03.906 ********
2026-04-11 05:27:07.950014 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-04-11 05:27:07.950114 | orchestrator |
2026-04-11 05:27:07.950147 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-11 05:27:45.025157 | orchestrator | Saturday 11 April 2026 05:27:08 +0000 (0:00:01.160) 0:17:05.067 ********
2026-04-11 05:27:45.025282 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.025300 | orchestrator |
2026-04-11 05:27:45.025313 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-11 05:27:45.025325 | orchestrator | Saturday 11 April 2026 05:27:10 +0000 (0:00:01.155) 0:17:06.223 ********
2026-04-11 05:27:45.025336 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.025347 | orchestrator |
2026-04-11 05:27:45.025358 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-11 05:27:45.025369 | orchestrator | Saturday 11 April 2026 05:27:11 +0000 (0:00:01.166) 0:17:07.389 ********
2026-04-11 05:27:45.025380 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.025391 | orchestrator |
2026-04-11 05:27:45.025402 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-11 05:27:45.025413 | orchestrator | Saturday 11 April 2026 05:27:12 +0000 (0:00:01.134) 0:17:08.524 ********
2026-04-11 05:27:45.025424 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.025434 | orchestrator |
2026-04-11 05:27:45.025446 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-11 05:27:45.025480 | orchestrator | Saturday 11 April 2026 05:27:13 +0000 (0:00:01.169) 0:17:09.694 ********
2026-04-11 05:27:45.025492 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.025503 | orchestrator |
2026-04-11 05:27:45.025514 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-11 05:27:45.025525 | orchestrator | Saturday 11 April 2026 05:27:14 +0000 (0:00:01.178) 0:17:10.872 ********
2026-04-11 05:27:45.025535 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.025546 | orchestrator |
2026-04-11 05:27:45.025557 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-11 05:27:45.025568 | orchestrator | Saturday 11 April 2026 05:27:15 +0000 (0:00:01.172) 0:17:12.045 ********
2026-04-11 05:27:45.025579 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.025590 | orchestrator |
2026-04-11 05:27:45.025600 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-11 05:27:45.025611 | orchestrator | Saturday 11 April 2026 05:27:16 +0000 (0:00:01.161) 0:17:13.206 ********
2026-04-11 05:27:45.025727 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.025745 | orchestrator |
2026-04-11 05:27:45.025758 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-11 05:27:45.025771 | orchestrator | Saturday 11 April 2026 05:27:18 +0000 (0:00:01.141) 0:17:14.348 ********
2026-04-11 05:27:45.025783 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:27:45.025796 | orchestrator |
2026-04-11 05:27:45.025809 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-11 05:27:45.025823 | orchestrator | Saturday 11 April 2026 05:27:18 +0000 (0:00:00.822) 0:17:15.170 ********
2026-04-11 05:27:45.025835 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-04-11 05:27:45.025848 | orchestrator |
2026-04-11 05:27:45.025859 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-11 05:27:45.025870 | orchestrator | Saturday 11 April 2026 05:27:20 +0000 (0:00:01.116) 0:17:16.287 ********
2026-04-11 05:27:45.025880 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-04-11 05:27:45.025892 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-11 05:27:45.025903 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-11 05:27:45.025914 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-11 05:27:45.025925 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-11 05:27:45.025936 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-11 05:27:45.025947 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-11 05:27:45.025958 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-11 05:27:45.025969 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-11 05:27:45.025980 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-11 05:27:45.025991 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-11 05:27:45.026001 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-11 05:27:45.026012 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-11 05:27:45.026087 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-11 05:27:45.026099 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-04-11 05:27:45.026132 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-04-11 05:27:45.026143 | orchestrator |
2026-04-11 05:27:45.026154 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-11 05:27:45.026165 | orchestrator | Saturday 11 April 2026 05:27:26 +0000 (0:00:06.487) 0:17:22.775 ********
2026-04-11 05:27:45.026176 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.026187 | orchestrator |
2026-04-11 05:27:45.026198 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-11 05:27:45.026218 | orchestrator | Saturday 11 April 2026 05:27:27 +0000 (0:00:00.774) 0:17:23.549 ********
2026-04-11 05:27:45.026242 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.026253 | orchestrator |
2026-04-11 05:27:45.026264 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-11 05:27:45.026274 | orchestrator | Saturday 11 April 2026 05:27:28 +0000 (0:00:00.833) 0:17:24.383 ********
2026-04-11 05:27:45.026285 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.026296 | orchestrator |
2026-04-11 05:27:45.026307 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-11 05:27:45.026317 | orchestrator | Saturday 11 April 2026 05:27:28 +0000 (0:00:00.784) 0:17:25.167 ********
2026-04-11 05:27:45.026328 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.026339 | orchestrator |
2026-04-11 05:27:45.026350 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-11 05:27:45.026379 | orchestrator | Saturday 11 April 2026 05:27:29 +0000 (0:00:00.871) 0:17:26.039 ********
2026-04-11 05:27:45.026390 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.026401 | orchestrator |
2026-04-11 05:27:45.026412 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-11 05:27:45.026423 | orchestrator | Saturday 11 April 2026 05:27:30 +0000 (0:00:00.786) 0:17:26.826 ********
2026-04-11 05:27:45.026434 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.026445 | orchestrator |
2026-04-11 05:27:45.026456 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-11 05:27:45.026467 | orchestrator | Saturday 11 April 2026 05:27:31 +0000 (0:00:00.794) 0:17:27.621 ********
2026-04-11 05:27:45.026478 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.026489 | orchestrator |
2026-04-11 05:27:45.026500 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-11 05:27:45.026510 | orchestrator | Saturday 11 April 2026 05:27:32 +0000 (0:00:00.779) 0:17:28.400 ********
2026-04-11 05:27:45.026521 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.026532 | orchestrator |
2026-04-11 05:27:45.026543 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-11 05:27:45.026554 | orchestrator | Saturday 11 April 2026 05:27:32 +0000 (0:00:00.788) 0:17:29.188 ********
2026-04-11 05:27:45.026565 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.026576 | orchestrator |
2026-04-11 05:27:45.026587 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-11 05:27:45.026598 | orchestrator | Saturday 11 April 2026 05:27:33 +0000 (0:00:00.787) 0:17:29.976 ********
2026-04-11 05:27:45.026608 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.026619 | orchestrator |
2026-04-11 05:27:45.026630 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-11 05:27:45.026641 | orchestrator | Saturday 11 April 2026 05:27:34 +0000 (0:00:00.781) 0:17:30.758 ********
2026-04-11 05:27:45.026652 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:27:45.026663 | orchestrator |
2026-04-11
05:27:45.026674 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-11 05:27:45.026691 | orchestrator | Saturday 11 April 2026 05:27:35 +0000 (0:00:00.800) 0:17:31.558 ******** 2026-04-11 05:27:45.026702 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:27:45.026713 | orchestrator | 2026-04-11 05:27:45.026724 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-11 05:27:45.026735 | orchestrator | Saturday 11 April 2026 05:27:36 +0000 (0:00:00.778) 0:17:32.337 ******** 2026-04-11 05:27:45.026746 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:27:45.026757 | orchestrator | 2026-04-11 05:27:45.026768 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-11 05:27:45.026778 | orchestrator | Saturday 11 April 2026 05:27:37 +0000 (0:00:00.899) 0:17:33.237 ******** 2026-04-11 05:27:45.026789 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:27:45.026800 | orchestrator | 2026-04-11 05:27:45.026811 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-11 05:27:45.026829 | orchestrator | Saturday 11 April 2026 05:27:37 +0000 (0:00:00.779) 0:17:34.016 ******** 2026-04-11 05:27:45.026840 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:27:45.026851 | orchestrator | 2026-04-11 05:27:45.026861 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-11 05:27:45.026872 | orchestrator | Saturday 11 April 2026 05:27:38 +0000 (0:00:00.909) 0:17:34.925 ******** 2026-04-11 05:27:45.026883 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:27:45.026894 | orchestrator | 2026-04-11 05:27:45.026905 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-11 05:27:45.026916 | orchestrator | Saturday 11 April 2026 05:27:39 +0000 (0:00:00.791) 
0:17:35.717 ******** 2026-04-11 05:27:45.026926 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:27:45.026937 | orchestrator | 2026-04-11 05:27:45.026948 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 05:27:45.026961 | orchestrator | Saturday 11 April 2026 05:27:40 +0000 (0:00:00.803) 0:17:36.521 ******** 2026-04-11 05:27:45.026972 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:27:45.026982 | orchestrator | 2026-04-11 05:27:45.026993 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 05:27:45.027004 | orchestrator | Saturday 11 April 2026 05:27:41 +0000 (0:00:00.867) 0:17:37.389 ******** 2026-04-11 05:27:45.027015 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:27:45.027026 | orchestrator | 2026-04-11 05:27:45.027036 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 05:27:45.027048 | orchestrator | Saturday 11 April 2026 05:27:42 +0000 (0:00:00.857) 0:17:38.246 ******** 2026-04-11 05:27:45.027058 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:27:45.027069 | orchestrator | 2026-04-11 05:27:45.027080 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 05:27:45.027091 | orchestrator | Saturday 11 April 2026 05:27:42 +0000 (0:00:00.842) 0:17:39.089 ******** 2026-04-11 05:27:45.027153 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:27:45.027165 | orchestrator | 2026-04-11 05:27:45.027176 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 05:27:45.027187 | orchestrator | Saturday 11 April 2026 05:27:43 +0000 (0:00:00.841) 0:17:39.930 ******** 2026-04-11 05:27:45.027198 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-11 05:27:45.027209 | 
orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-11 05:27:45.027220 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-11 05:27:45.027230 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:27:45.027241 | orchestrator | 2026-04-11 05:27:45.027252 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 05:27:45.027263 | orchestrator | Saturday 11 April 2026 05:27:44 +0000 (0:00:01.069) 0:17:41.000 ******** 2026-04-11 05:27:45.027274 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-11 05:27:45.027292 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-11 05:29:05.585628 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-11 05:29:05.585745 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:05.585761 | orchestrator | 2026-04-11 05:29:05.585774 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 05:29:05.585786 | orchestrator | Saturday 11 April 2026 05:27:45 +0000 (0:00:01.037) 0:17:42.038 ******** 2026-04-11 05:29:05.585797 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-11 05:29:05.585808 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-11 05:29:05.585820 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-11 05:29:05.585830 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:05.585841 | orchestrator | 2026-04-11 05:29:05.585853 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 05:29:05.585887 | orchestrator | Saturday 11 April 2026 05:27:46 +0000 (0:00:01.067) 0:17:43.105 ******** 2026-04-11 05:29:05.585898 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:05.585909 | orchestrator | 2026-04-11 05:29:05.585920 | orchestrator | TASK [ceph-facts : 
Set_fact rgw_instances] ************************************* 2026-04-11 05:29:05.585931 | orchestrator | Saturday 11 April 2026 05:27:47 +0000 (0:00:00.790) 0:17:43.896 ******** 2026-04-11 05:29:05.585942 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-11 05:29:05.585953 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:05.585964 | orchestrator | 2026-04-11 05:29:05.585975 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-11 05:29:05.585986 | orchestrator | Saturday 11 April 2026 05:27:48 +0000 (0:00:00.930) 0:17:44.826 ******** 2026-04-11 05:29:05.585996 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:29:05.586007 | orchestrator | 2026-04-11 05:29:05.586076 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-11 05:29:05.586089 | orchestrator | Saturday 11 April 2026 05:27:50 +0000 (0:00:01.435) 0:17:46.262 ******** 2026-04-11 05:29:05.586100 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.586140 | orchestrator | 2026-04-11 05:29:05.586159 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-11 05:29:05.586212 | orchestrator | Saturday 11 April 2026 05:27:50 +0000 (0:00:00.851) 0:17:47.113 ******** 2026-04-11 05:29:05.586230 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2 2026-04-11 05:29:05.586245 | orchestrator | 2026-04-11 05:29:05.586258 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-11 05:29:05.586272 | orchestrator | Saturday 11 April 2026 05:27:52 +0000 (0:00:01.250) 0:17:48.363 ******** 2026-04-11 05:29:05.586284 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.586297 | orchestrator | 2026-04-11 05:29:05.586308 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-11 05:29:05.586318 | 
orchestrator | Saturday 11 April 2026 05:27:55 +0000 (0:00:03.536) 0:17:51.900 ******** 2026-04-11 05:29:05.586329 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:05.586340 | orchestrator | 2026-04-11 05:29:05.586351 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-11 05:29:05.586362 | orchestrator | Saturday 11 April 2026 05:27:56 +0000 (0:00:01.155) 0:17:53.056 ******** 2026-04-11 05:29:05.586373 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.586383 | orchestrator | 2026-04-11 05:29:05.586394 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-11 05:29:05.586405 | orchestrator | Saturday 11 April 2026 05:27:58 +0000 (0:00:01.265) 0:17:54.322 ******** 2026-04-11 05:29:05.586415 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.586426 | orchestrator | 2026-04-11 05:29:05.586437 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-11 05:29:05.586447 | orchestrator | Saturday 11 April 2026 05:27:59 +0000 (0:00:01.152) 0:17:55.475 ******** 2026-04-11 05:29:05.586458 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:29:05.586469 | orchestrator | 2026-04-11 05:29:05.586479 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-11 05:29:05.586490 | orchestrator | Saturday 11 April 2026 05:28:01 +0000 (0:00:02.155) 0:17:57.630 ******** 2026-04-11 05:29:05.586501 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.586511 | orchestrator | 2026-04-11 05:29:05.586522 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-11 05:29:05.586533 | orchestrator | Saturday 11 April 2026 05:28:03 +0000 (0:00:01.688) 0:17:59.319 ******** 2026-04-11 05:29:05.586543 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.586554 | orchestrator | 2026-04-11 05:29:05.586565 | 
orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-11 05:29:05.586576 | orchestrator | Saturday 11 April 2026 05:28:04 +0000 (0:00:01.467) 0:18:00.786 ******** 2026-04-11 05:29:05.586586 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.586608 | orchestrator | 2026-04-11 05:29:05.586619 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-11 05:29:05.586630 | orchestrator | Saturday 11 April 2026 05:28:06 +0000 (0:00:01.528) 0:18:02.315 ******** 2026-04-11 05:29:05.586640 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:29:05.586651 | orchestrator | 2026-04-11 05:29:05.586661 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-11 05:29:05.586672 | orchestrator | Saturday 11 April 2026 05:28:07 +0000 (0:00:01.572) 0:18:03.887 ******** 2026-04-11 05:29:05.586683 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:29:05.586693 | orchestrator | 2026-04-11 05:29:05.586704 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-11 05:29:05.586715 | orchestrator | Saturday 11 April 2026 05:28:09 +0000 (0:00:01.540) 0:18:05.427 ******** 2026-04-11 05:29:05.586725 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 05:29:05.586736 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-11 05:29:05.586747 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-11 05:29:05.586758 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-11 05:29:05.586769 | orchestrator | 2026-04-11 05:29:05.586799 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-11 05:29:05.586810 | orchestrator | Saturday 11 April 2026 05:28:13 +0000 (0:00:04.090) 0:18:09.519 
******** 2026-04-11 05:29:05.586821 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:29:05.586833 | orchestrator | 2026-04-11 05:29:05.586843 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-11 05:29:05.586854 | orchestrator | Saturday 11 April 2026 05:28:15 +0000 (0:00:02.053) 0:18:11.572 ******** 2026-04-11 05:29:05.586865 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.586876 | orchestrator | 2026-04-11 05:29:05.586886 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-11 05:29:05.586897 | orchestrator | Saturday 11 April 2026 05:28:16 +0000 (0:00:01.130) 0:18:12.703 ******** 2026-04-11 05:29:05.586908 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.586918 | orchestrator | 2026-04-11 05:29:05.586929 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-11 05:29:05.586939 | orchestrator | Saturday 11 April 2026 05:28:17 +0000 (0:00:01.118) 0:18:13.821 ******** 2026-04-11 05:29:05.586950 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.586961 | orchestrator | 2026-04-11 05:29:05.586971 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-11 05:29:05.586982 | orchestrator | Saturday 11 April 2026 05:28:19 +0000 (0:00:01.702) 0:18:15.523 ******** 2026-04-11 05:29:05.586993 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.587003 | orchestrator | 2026-04-11 05:29:05.587014 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-11 05:29:05.587025 | orchestrator | Saturday 11 April 2026 05:28:20 +0000 (0:00:01.472) 0:18:16.996 ******** 2026-04-11 05:29:05.587035 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:05.587046 | orchestrator | 2026-04-11 05:29:05.587057 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-04-11 05:29:05.587068 | orchestrator | Saturday 11 April 2026 05:28:21 +0000 (0:00:00.756) 0:18:17.752 ******** 2026-04-11 05:29:05.587078 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-04-11 05:29:05.587089 | orchestrator | 2026-04-11 05:29:05.587100 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-11 05:29:05.587152 | orchestrator | Saturday 11 April 2026 05:28:22 +0000 (0:00:01.142) 0:18:18.894 ******** 2026-04-11 05:29:05.587165 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:05.587176 | orchestrator | 2026-04-11 05:29:05.587187 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-11 05:29:05.587198 | orchestrator | Saturday 11 April 2026 05:28:23 +0000 (0:00:01.110) 0:18:20.004 ******** 2026-04-11 05:29:05.587216 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:05.587227 | orchestrator | 2026-04-11 05:29:05.587238 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-11 05:29:05.587249 | orchestrator | Saturday 11 April 2026 05:28:24 +0000 (0:00:01.109) 0:18:21.114 ******** 2026-04-11 05:29:05.587260 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-04-11 05:29:05.587272 | orchestrator | 2026-04-11 05:29:05.587283 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-11 05:29:05.587294 | orchestrator | Saturday 11 April 2026 05:28:26 +0000 (0:00:01.115) 0:18:22.230 ******** 2026-04-11 05:29:05.587305 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:29:05.587316 | orchestrator | 2026-04-11 05:29:05.587327 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-11 05:29:05.587338 | orchestrator | Saturday 11 April 2026 05:28:28 +0000 
(0:00:02.692) 0:18:24.922 ******** 2026-04-11 05:29:05.587349 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.587361 | orchestrator | 2026-04-11 05:29:05.587371 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-11 05:29:05.587383 | orchestrator | Saturday 11 April 2026 05:28:30 +0000 (0:00:02.021) 0:18:26.944 ******** 2026-04-11 05:29:05.587394 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.587405 | orchestrator | 2026-04-11 05:29:05.587416 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-11 05:29:05.587427 | orchestrator | Saturday 11 April 2026 05:28:33 +0000 (0:00:02.444) 0:18:29.388 ******** 2026-04-11 05:29:05.587438 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:29:05.587449 | orchestrator | 2026-04-11 05:29:05.587460 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-11 05:29:05.587471 | orchestrator | Saturday 11 April 2026 05:28:36 +0000 (0:00:02.890) 0:18:32.279 ******** 2026-04-11 05:29:05.587482 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-04-11 05:29:05.587493 | orchestrator | 2026-04-11 05:29:05.587504 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-11 05:29:05.587515 | orchestrator | Saturday 11 April 2026 05:28:37 +0000 (0:00:01.140) 0:18:33.419 ******** 2026-04-11 05:29:05.587526 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-04-11 05:29:05.587538 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.587549 | orchestrator | 2026-04-11 05:29:05.587560 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-11 05:29:05.587571 | orchestrator | Saturday 11 April 2026 05:29:00 +0000 (0:00:22.905) 0:18:56.324 ******** 2026-04-11 05:29:05.587582 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:05.587593 | orchestrator | 2026-04-11 05:29:05.587604 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-11 05:29:05.587615 | orchestrator | Saturday 11 April 2026 05:29:02 +0000 (0:00:02.823) 0:18:59.147 ******** 2026-04-11 05:29:05.587626 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:05.587638 | orchestrator | 2026-04-11 05:29:05.587649 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-11 05:29:05.587660 | orchestrator | Saturday 11 April 2026 05:29:03 +0000 (0:00:00.774) 0:18:59.922 ******** 2026-04-11 05:29:05.587683 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-11 05:29:46.114696 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-11 05:29:46.114865 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-11 05:29:46.114886 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-11 05:29:46.114916 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-11 05:29:46.114930 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__5c5e65acaadd167be1af87e03ebe92c7ebd59f87'}])  2026-04-11 05:29:46.114944 | orchestrator | 2026-04-11 05:29:46.114956 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-04-11 05:29:46.114968 | orchestrator | Saturday 11 April 2026 05:29:13 +0000 (0:00:09.587) 0:19:09.509 ******** 2026-04-11 05:29:46.114980 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:29:46.114992 | orchestrator | 
2026-04-11 05:29:46.115003 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 05:29:46.115014 | orchestrator | Saturday 11 April 2026 05:29:15 +0000 (0:00:02.093) 0:19:11.603 ******** 2026-04-11 05:29:46.115025 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:29:46.115037 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-04-11 05:29:46.115047 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-04-11 05:29:46.115058 | orchestrator | 2026-04-11 05:29:46.115069 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 05:29:46.115080 | orchestrator | Saturday 11 April 2026 05:29:17 +0000 (0:00:01.852) 0:19:13.456 ******** 2026-04-11 05:29:46.115091 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-11 05:29:46.115102 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-11 05:29:46.115196 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-11 05:29:46.115211 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:46.115223 | orchestrator | 2026-04-11 05:29:46.115236 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-04-11 05:29:46.115249 | orchestrator | Saturday 11 April 2026 05:29:18 +0000 (0:00:01.594) 0:19:15.050 ******** 2026-04-11 05:29:46.115262 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:46.115274 | orchestrator | 2026-04-11 05:29:46.115287 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-04-11 05:29:46.115299 | orchestrator | Saturday 11 April 2026 05:29:19 +0000 (0:00:00.761) 0:19:15.812 ******** 2026-04-11 05:29:46.115312 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:46.115325 | orchestrator | 2026-04-11 05:29:46.115338 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-11 05:29:46.115361 | orchestrator | Saturday 11 April 2026 05:29:21 +0000 (0:00:01.988) 0:19:17.800 ******** 2026-04-11 05:29:46.115373 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:46.115386 | orchestrator | 2026-04-11 05:29:46.115398 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-11 05:29:46.115411 | orchestrator | Saturday 11 April 2026 05:29:22 +0000 (0:00:00.790) 0:19:18.591 ******** 2026-04-11 05:29:46.115423 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:46.115436 | orchestrator | 2026-04-11 05:29:46.115449 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-11 05:29:46.115461 | orchestrator | Saturday 11 April 2026 05:29:23 +0000 (0:00:00.824) 0:19:19.416 ******** 2026-04-11 05:29:46.115474 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:46.115486 | orchestrator | 2026-04-11 05:29:46.115499 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-11 05:29:46.115530 | orchestrator | Saturday 11 April 2026 05:29:24 +0000 (0:00:00.824) 0:19:20.240 ******** 2026-04-11 05:29:46.115545 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:46.115557 | orchestrator | 2026-04-11 05:29:46.115569 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-11 05:29:46.115581 | orchestrator | Saturday 11 April 2026 05:29:24 +0000 (0:00:00.765) 0:19:21.006 ******** 2026-04-11 05:29:46.115594 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:46.115606 | 
orchestrator | 2026-04-11 05:29:46.115618 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-11 05:29:46.115631 | orchestrator | Saturday 11 April 2026 05:29:25 +0000 (0:00:00.824) 0:19:21.830 ******** 2026-04-11 05:29:46.115644 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:46.115657 | orchestrator | 2026-04-11 05:29:46.115669 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-11 05:29:46.115680 | orchestrator | Saturday 11 April 2026 05:29:26 +0000 (0:00:00.804) 0:19:22.635 ******** 2026-04-11 05:29:46.115690 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:29:46.115701 | orchestrator | 2026-04-11 05:29:46.115712 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-04-11 05:29:46.115723 | orchestrator | 2026-04-11 05:29:46.115733 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-04-11 05:29:46.115744 | orchestrator | Saturday 11 April 2026 05:29:27 +0000 (0:00:01.483) 0:19:24.119 ******** 2026-04-11 05:29:46.115755 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:29:46.115765 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:29:46.115776 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:29:46.115787 | orchestrator | 2026-04-11 05:29:46.115798 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-04-11 05:29:46.115808 | orchestrator | 2026-04-11 05:29:46.115819 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-04-11 05:29:46.115836 | orchestrator | Saturday 11 April 2026 05:29:30 +0000 (0:00:02.181) 0:19:26.301 ******** 2026-04-11 05:29:46.115847 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:29:46.115858 | orchestrator | 2026-04-11 05:29:46.115869 | orchestrator | TASK [ceph-facts : Include facts.yml] 
******************************************
2026-04-11 05:29:46.115883 | orchestrator | Saturday 11 April 2026 05:29:31 +0000 (0:00:01.142) 0:19:27.443 ********
2026-04-11 05:29:46.115902 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:29:46.115921 | orchestrator |
2026-04-11 05:29:46.115940 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-11 05:29:46.115951 | orchestrator | Saturday 11 April 2026 05:29:32 +0000 (0:00:01.139) 0:19:28.582 ********
2026-04-11 05:29:46.115962 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:29:46.115972 | orchestrator |
2026-04-11 05:29:46.115983 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 05:29:46.115994 | orchestrator | Saturday 11 April 2026 05:29:33 +0000 (0:00:01.173) 0:19:29.756 ********
2026-04-11 05:29:46.116004 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:29:46.116015 | orchestrator |
2026-04-11 05:29:46.116034 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 05:29:46.116044 | orchestrator | Saturday 11 April 2026 05:29:34 +0000 (0:00:01.143) 0:19:30.900 ********
2026-04-11 05:29:46.116055 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:29:46.116066 | orchestrator |
2026-04-11 05:29:46.116077 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 05:29:46.116087 | orchestrator | Saturday 11 April 2026 05:29:35 +0000 (0:00:01.142) 0:19:32.042 ********
2026-04-11 05:29:46.116098 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:29:46.116109 | orchestrator |
2026-04-11 05:29:46.116149 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 05:29:46.116159 | orchestrator | Saturday 11 April 2026 05:29:37 +0000 (0:00:01.179) 0:19:33.222 ********
2026-04-11 05:29:46.116170 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:29:46.116181 | orchestrator |
2026-04-11 05:29:46.116192 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 05:29:46.116203 | orchestrator | Saturday 11 April 2026 05:29:38 +0000 (0:00:01.164) 0:19:34.387 ********
2026-04-11 05:29:46.116213 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:29:46.116224 | orchestrator |
2026-04-11 05:29:46.116235 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 05:29:46.116246 | orchestrator | Saturday 11 April 2026 05:29:39 +0000 (0:00:01.102) 0:19:35.489 ********
2026-04-11 05:29:46.116256 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:29:46.116267 | orchestrator |
2026-04-11 05:29:46.116278 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 05:29:46.116289 | orchestrator | Saturday 11 April 2026 05:29:40 +0000 (0:00:01.120) 0:19:36.610 ********
2026-04-11 05:29:46.116299 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:29:46.116310 | orchestrator |
2026-04-11 05:29:46.116321 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 05:29:46.116332 | orchestrator | Saturday 11 April 2026 05:29:41 +0000 (0:00:01.106) 0:19:37.716 ********
2026-04-11 05:29:46.116342 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:29:46.116353 | orchestrator |
2026-04-11 05:29:46.116364 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 05:29:46.116375 | orchestrator | Saturday 11 April 2026 05:29:42 +0000 (0:00:01.157) 0:19:38.874 ********
2026-04-11 05:29:46.116386 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:29:46.116396 | orchestrator |
2026-04-11 05:29:46.116407 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-11 05:29:46.116418 | orchestrator | Saturday 11 April 2026 05:29:43 +0000 (0:00:01.125) 0:19:40.000 ********
2026-04-11 05:29:46.116429 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:29:46.116439 | orchestrator |
2026-04-11 05:29:46.116450 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-11 05:29:46.116461 | orchestrator | Saturday 11 April 2026 05:29:44 +0000 (0:00:01.167) 0:19:41.167 ********
2026-04-11 05:29:46.116472 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:29:46.116483 | orchestrator |
2026-04-11 05:29:46.116493 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-11 05:29:46.116504 | orchestrator | Saturday 11 April 2026 05:29:46 +0000 (0:00:01.107) 0:19:42.275 ********
2026-04-11 05:29:46.116524 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.472927 | orchestrator |
2026-04-11 05:30:32.473051 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-11 05:30:32.473068 | orchestrator | Saturday 11 April 2026 05:29:47 +0000 (0:00:01.098) 0:19:43.374 ********
2026-04-11 05:30:32.473081 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473094 | orchestrator |
2026-04-11 05:30:32.473105 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-11 05:30:32.473173 | orchestrator | Saturday 11 April 2026 05:29:48 +0000 (0:00:01.121) 0:19:44.495 ********
2026-04-11 05:30:32.473188 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473222 | orchestrator |
2026-04-11 05:30:32.473234 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-11 05:30:32.473245 | orchestrator | Saturday 11 April 2026 05:29:49 +0000 (0:00:01.118) 0:19:45.614 ********
2026-04-11 05:30:32.473256 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473274 | orchestrator |
2026-04-11 05:30:32.473293 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-11 05:30:32.473312 | orchestrator | Saturday 11 April 2026 05:29:50 +0000 (0:00:01.147) 0:19:46.761 ********
2026-04-11 05:30:32.473329 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473347 | orchestrator |
2026-04-11 05:30:32.473365 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-11 05:30:32.473385 | orchestrator | Saturday 11 April 2026 05:29:51 +0000 (0:00:01.107) 0:19:47.868 ********
2026-04-11 05:30:32.473403 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473421 | orchestrator |
2026-04-11 05:30:32.473441 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-11 05:30:32.473459 | orchestrator | Saturday 11 April 2026 05:29:52 +0000 (0:00:01.143) 0:19:49.012 ********
2026-04-11 05:30:32.473477 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473496 | orchestrator |
2026-04-11 05:30:32.473532 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-11 05:30:32.473553 | orchestrator | Saturday 11 April 2026 05:29:53 +0000 (0:00:01.151) 0:19:50.163 ********
2026-04-11 05:30:32.473573 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473594 | orchestrator |
2026-04-11 05:30:32.473613 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-11 05:30:32.473630 | orchestrator | Saturday 11 April 2026 05:29:55 +0000 (0:00:01.148) 0:19:51.311 ********
2026-04-11 05:30:32.473643 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473656 | orchestrator |
2026-04-11 05:30:32.473669 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-11 05:30:32.473682 | orchestrator | Saturday 11 April 2026 05:29:56 +0000 (0:00:01.179) 0:19:52.491 ********
2026-04-11 05:30:32.473694 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473707 | orchestrator |
2026-04-11 05:30:32.473719 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-11 05:30:32.473732 | orchestrator | Saturday 11 April 2026 05:29:57 +0000 (0:00:01.140) 0:19:53.631 ********
2026-04-11 05:30:32.473744 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473757 | orchestrator |
2026-04-11 05:30:32.473769 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-11 05:30:32.473781 | orchestrator | Saturday 11 April 2026 05:29:58 +0000 (0:00:01.147) 0:19:54.779 ********
2026-04-11 05:30:32.473795 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473808 | orchestrator |
2026-04-11 05:30:32.473820 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-11 05:30:32.473834 | orchestrator | Saturday 11 April 2026 05:29:59 +0000 (0:00:01.133) 0:19:55.912 ********
2026-04-11 05:30:32.473846 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473858 | orchestrator |
2026-04-11 05:30:32.473868 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-11 05:30:32.473879 | orchestrator | Saturday 11 April 2026 05:30:00 +0000 (0:00:01.136) 0:19:57.049 ********
2026-04-11 05:30:32.473890 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473901 | orchestrator |
2026-04-11 05:30:32.473912 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-11 05:30:32.473923 | orchestrator | Saturday 11 April 2026 05:30:01 +0000 (0:00:01.115) 0:19:58.164 ********
2026-04-11 05:30:32.473934 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.473945 | orchestrator |
2026-04-11 05:30:32.473956 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-11 05:30:32.473966 | orchestrator | Saturday 11 April 2026 05:30:03 +0000 (0:00:01.159) 0:19:59.324 ********
2026-04-11 05:30:32.473977 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474000 | orchestrator |
2026-04-11 05:30:32.474011 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-11 05:30:32.474085 | orchestrator | Saturday 11 April 2026 05:30:04 +0000 (0:00:01.145) 0:20:00.469 ********
2026-04-11 05:30:32.474096 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474107 | orchestrator |
2026-04-11 05:30:32.474144 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-11 05:30:32.474157 | orchestrator | Saturday 11 April 2026 05:30:05 +0000 (0:00:01.123) 0:20:01.593 ********
2026-04-11 05:30:32.474168 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474178 | orchestrator |
2026-04-11 05:30:32.474189 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-11 05:30:32.474200 | orchestrator | Saturday 11 April 2026 05:30:06 +0000 (0:00:01.153) 0:20:02.747 ********
2026-04-11 05:30:32.474210 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474221 | orchestrator |
2026-04-11 05:30:32.474231 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-11 05:30:32.474242 | orchestrator | Saturday 11 April 2026 05:30:07 +0000 (0:00:01.092) 0:20:03.840 ********
2026-04-11 05:30:32.474253 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474273 | orchestrator |
2026-04-11 05:30:32.474284 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-11 05:30:32.474295 | orchestrator | Saturday 11 April 2026 05:30:08 +0000 (0:00:01.126) 0:20:04.966 ********
2026-04-11 05:30:32.474306 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474316 | orchestrator |
2026-04-11 05:30:32.474348 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-11 05:30:32.474359 | orchestrator | Saturday 11 April 2026 05:30:09 +0000 (0:00:01.097) 0:20:06.064 ********
2026-04-11 05:30:32.474370 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474381 | orchestrator |
2026-04-11 05:30:32.474392 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-11 05:30:32.474403 | orchestrator | Saturday 11 April 2026 05:30:10 +0000 (0:00:01.110) 0:20:07.175 ********
2026-04-11 05:30:32.474413 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474424 | orchestrator |
2026-04-11 05:30:32.474435 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-11 05:30:32.474445 | orchestrator | Saturday 11 April 2026 05:30:12 +0000 (0:00:01.135) 0:20:08.311 ********
2026-04-11 05:30:32.474456 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474467 | orchestrator |
2026-04-11 05:30:32.474477 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-11 05:30:32.474488 | orchestrator | Saturday 11 April 2026 05:30:13 +0000 (0:00:01.143) 0:20:09.455 ********
2026-04-11 05:30:32.474499 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474509 | orchestrator |
2026-04-11 05:30:32.474520 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-11 05:30:32.474532 | orchestrator | Saturday 11 April 2026 05:30:14 +0000 (0:00:01.151) 0:20:10.606 ********
2026-04-11 05:30:32.474543 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474554 | orchestrator |
2026-04-11 05:30:32.474564 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-11 05:30:32.474580 | orchestrator | Saturday 11 April 2026 05:30:15 +0000 (0:00:01.140) 0:20:11.747 ********
2026-04-11 05:30:32.474599 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474617 | orchestrator |
2026-04-11 05:30:32.474643 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-11 05:30:32.474662 | orchestrator | Saturday 11 April 2026 05:30:16 +0000 (0:00:01.139) 0:20:12.887 ********
2026-04-11 05:30:32.474680 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474698 | orchestrator |
2026-04-11 05:30:32.474733 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-11 05:30:32.474766 | orchestrator | Saturday 11 April 2026 05:30:17 +0000 (0:00:01.137) 0:20:14.024 ********
2026-04-11 05:30:32.474785 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474804 | orchestrator |
2026-04-11 05:30:32.474819 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-11 05:30:32.474838 | orchestrator | Saturday 11 April 2026 05:30:19 +0000 (0:00:01.224) 0:20:15.248 ********
2026-04-11 05:30:32.474856 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474875 | orchestrator |
2026-04-11 05:30:32.474894 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-11 05:30:32.474913 | orchestrator | Saturday 11 April 2026 05:30:20 +0000 (0:00:01.130) 0:20:16.379 ********
2026-04-11 05:30:32.474932 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.474951 | orchestrator |
2026-04-11 05:30:32.474969 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-11 05:30:32.474986 | orchestrator | Saturday 11 April 2026 05:30:21 +0000 (0:00:01.205) 0:20:17.585 ********
2026-04-11 05:30:32.474997 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.475008 | orchestrator |
2026-04-11 05:30:32.475019 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-11 05:30:32.475030 | orchestrator | Saturday 11 April 2026 05:30:22 +0000 (0:00:01.611) 0:20:19.197 ********
2026-04-11 05:30:32.475040 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.475051 | orchestrator |
2026-04-11 05:30:32.475062 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-11 05:30:32.475072 | orchestrator | Saturday 11 April 2026 05:30:24 +0000 (0:00:01.142) 0:20:20.339 ********
2026-04-11 05:30:32.475083 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.475094 | orchestrator |
2026-04-11 05:30:32.475105 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-11 05:30:32.475141 | orchestrator | Saturday 11 April 2026 05:30:25 +0000 (0:00:01.258) 0:20:21.597 ********
2026-04-11 05:30:32.475157 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.475168 | orchestrator |
2026-04-11 05:30:32.475179 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-11 05:30:32.475190 | orchestrator | Saturday 11 April 2026 05:30:26 +0000 (0:00:01.164) 0:20:22.762 ********
2026-04-11 05:30:32.475201 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.475211 | orchestrator |
2026-04-11 05:30:32.475222 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 05:30:32.475235 | orchestrator | Saturday 11 April 2026 05:30:27 +0000 (0:00:01.126) 0:20:23.889 ********
2026-04-11 05:30:32.475246 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.475257 | orchestrator |
2026-04-11 05:30:32.475267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 05:30:32.475278 | orchestrator | Saturday 11 April 2026 05:30:28 +0000 (0:00:01.209) 0:20:25.098 ********
2026-04-11 05:30:32.475289 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.475300 | orchestrator |
2026-04-11 05:30:32.475310 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 05:30:32.475321 | orchestrator | Saturday 11 April 2026 05:30:30 +0000 (0:00:01.125) 0:20:26.224 ********
2026-04-11 05:30:32.475332 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.475342 | orchestrator |
2026-04-11 05:30:32.475353 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 05:30:32.475364 | orchestrator | Saturday 11 April 2026 05:30:31 +0000 (0:00:01.184) 0:20:27.408 ********
2026-04-11 05:30:32.475375 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:30:32.475385 | orchestrator |
2026-04-11 05:30:32.475396 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 05:30:32.475407 | orchestrator | Saturday 11 April 2026 05:30:32 +0000 (0:00:01.120) 0:20:28.529 ********
2026-04-11 05:30:32.475428 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-11 05:31:06.885398 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-11 05:31:06.885542 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-11 05:31:06.885559 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:31:06.885571 | orchestrator |
2026-04-11 05:31:06.885583 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 05:31:06.885595 | orchestrator | Saturday 11 April 2026 05:30:33 +0000 (0:00:01.422) 0:20:29.952 ********
2026-04-11 05:31:06.885606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-11 05:31:06.885617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-11 05:31:06.885628 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-11 05:31:06.885639 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:31:06.885650 | orchestrator |
2026-04-11 05:31:06.885661 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 05:31:06.885672 | orchestrator | Saturday 11 April 2026 05:30:35 +0000 (0:00:01.378) 0:20:31.331 ********
2026-04-11 05:31:06.885683 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-11 05:31:06.885694 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-11 05:31:06.885704 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-11 05:31:06.885715 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:31:06.885726 | orchestrator |
2026-04-11 05:31:06.885736 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 05:31:06.885747 | orchestrator | Saturday 11 April 2026 05:30:36 +0000 (0:00:01.760) 0:20:33.091 ********
2026-04-11 05:31:06.885758 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:31:06.885769 | orchestrator |
2026-04-11 05:31:06.885794 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 05:31:06.885805 | orchestrator | Saturday 11 April 2026 05:30:37 +0000 (0:00:01.120) 0:20:34.211 ********
2026-04-11 05:31:06.885817 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-11 05:31:06.885828 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:31:06.885838 | orchestrator |
2026-04-11 05:31:06.885849 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-11 05:31:06.885860 | orchestrator | Saturday 11 April 2026 05:30:39 +0000 (0:00:01.701) 0:20:35.913 ********
2026-04-11 05:31:06.885870 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:31:06.885881 | orchestrator |
2026-04-11 05:31:06.885891 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-11 05:31:06.885902 | orchestrator | Saturday 11 April 2026 05:30:40 +0000 (0:00:01.128) 0:20:37.041 ********
2026-04-11 05:31:06.885913 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 05:31:06.885923 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-11 05:31:06.885936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-11 05:31:06.885949 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:31:06.885961 | orchestrator |
2026-04-11 05:31:06.885973 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-11 05:31:06.885986 | orchestrator | Saturday 11 April 2026 05:30:42 +0000 (0:00:01.409) 0:20:38.451 ********
2026-04-11 05:31:06.886000 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:31:06.886012 | orchestrator |
2026-04-11 05:31:06.886087 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-11 05:31:06.886101 | orchestrator | Saturday 11 April 2026 05:30:43 +0000 (0:00:01.138) 0:20:39.589 ********
2026-04-11 05:31:06.886114 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:31:06.886154 | orchestrator |
2026-04-11 05:31:06.886166 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-11 05:31:06.886179 | orchestrator | Saturday 11 April 2026 05:30:44 +0000 (0:00:01.135) 0:20:40.725 ********
2026-04-11 05:31:06.886191 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:31:06.886204 | orchestrator |
2026-04-11 05:31:06.886216 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-11 05:31:06.886238 | orchestrator | Saturday 11 April 2026 05:30:45 +0000 (0:00:01.152) 0:20:41.877 ********
2026-04-11 05:31:06.886251 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:31:06.886264 | orchestrator |
2026-04-11 05:31:06.886276 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-04-11 05:31:06.886288 | orchestrator |
2026-04-11 05:31:06.886299 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-11 05:31:06.886310 | orchestrator | Saturday 11 April 2026 05:30:46 +0000 (0:00:00.993) 0:20:42.871 ********
2026-04-11 05:31:06.886321 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886332 | orchestrator |
2026-04-11 05:31:06.886342 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-11 05:31:06.886353 | orchestrator | Saturday 11 April 2026 05:30:47 +0000 (0:00:00.809) 0:20:43.680 ********
2026-04-11 05:31:06.886364 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886375 | orchestrator |
2026-04-11 05:31:06.886385 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-11 05:31:06.886396 | orchestrator | Saturday 11 April 2026 05:30:48 +0000 (0:00:00.889) 0:20:44.570 ********
2026-04-11 05:31:06.886407 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886418 | orchestrator |
2026-04-11 05:31:06.886428 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 05:31:06.886439 | orchestrator | Saturday 11 April 2026 05:30:49 +0000 (0:00:00.868) 0:20:45.439 ********
2026-04-11 05:31:06.886450 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886461 | orchestrator |
2026-04-11 05:31:06.886471 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 05:31:06.886482 | orchestrator | Saturday 11 April 2026 05:30:50 +0000 (0:00:00.778) 0:20:46.217 ********
2026-04-11 05:31:06.886493 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886503 | orchestrator |
2026-04-11 05:31:06.886514 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 05:31:06.886525 | orchestrator | Saturday 11 April 2026 05:30:50 +0000 (0:00:00.765) 0:20:46.983 ********
2026-04-11 05:31:06.886536 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886546 | orchestrator |
2026-04-11 05:31:06.886578 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 05:31:06.886590 | orchestrator | Saturday 11 April 2026 05:30:51 +0000 (0:00:00.779) 0:20:47.762 ********
2026-04-11 05:31:06.886601 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886612 | orchestrator |
2026-04-11 05:31:06.886622 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 05:31:06.886633 | orchestrator | Saturday 11 April 2026 05:30:52 +0000 (0:00:00.752) 0:20:48.515 ********
2026-04-11 05:31:06.886644 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886655 | orchestrator |
2026-04-11 05:31:06.886665 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 05:31:06.886676 | orchestrator | Saturday 11 April 2026 05:30:53 +0000 (0:00:00.838) 0:20:49.354 ********
2026-04-11 05:31:06.886687 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886697 | orchestrator |
2026-04-11 05:31:06.886708 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 05:31:06.886719 | orchestrator | Saturday 11 April 2026 05:30:53 +0000 (0:00:00.788) 0:20:50.143 ********
2026-04-11 05:31:06.886730 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886741 | orchestrator |
2026-04-11 05:31:06.886752 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 05:31:06.886762 | orchestrator | Saturday 11 April 2026 05:30:54 +0000 (0:00:00.766) 0:20:50.909 ********
2026-04-11 05:31:06.886773 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886784 | orchestrator |
2026-04-11 05:31:06.886795 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 05:31:06.886805 | orchestrator | Saturday 11 April 2026 05:30:55 +0000 (0:00:00.791) 0:20:51.700 ********
2026-04-11 05:31:06.886829 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886840 | orchestrator |
2026-04-11 05:31:06.886851 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-11 05:31:06.886862 | orchestrator | Saturday 11 April 2026 05:30:56 +0000 (0:00:00.794) 0:20:52.495 ********
2026-04-11 05:31:06.886872 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886883 | orchestrator |
2026-04-11 05:31:06.886894 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-11 05:31:06.886905 | orchestrator | Saturday 11 April 2026 05:30:57 +0000 (0:00:00.796) 0:20:53.292 ********
2026-04-11 05:31:06.886915 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886926 | orchestrator |
2026-04-11 05:31:06.886937 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-11 05:31:06.886947 | orchestrator | Saturday 11 April 2026 05:30:57 +0000 (0:00:00.825) 0:20:54.117 ********
2026-04-11 05:31:06.886958 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.886969 | orchestrator |
2026-04-11 05:31:06.886979 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-11 05:31:06.886990 | orchestrator | Saturday 11 April 2026 05:30:58 +0000 (0:00:00.843) 0:20:54.961 ********
2026-04-11 05:31:06.887001 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.887012 | orchestrator |
2026-04-11 05:31:06.887022 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-11 05:31:06.887033 | orchestrator | Saturday 11 April 2026 05:30:59 +0000 (0:00:00.802) 0:20:55.763 ********
2026-04-11 05:31:06.887044 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.887055 | orchestrator |
2026-04-11 05:31:06.887065 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-11 05:31:06.887076 | orchestrator | Saturday 11 April 2026 05:31:00 +0000 (0:00:00.805) 0:20:56.569 ********
2026-04-11 05:31:06.887087 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.887097 | orchestrator |
2026-04-11 05:31:06.887108 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-11 05:31:06.887119 | orchestrator | Saturday 11 April 2026 05:31:01 +0000 (0:00:00.787) 0:20:57.356 ********
2026-04-11 05:31:06.887150 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.887160 | orchestrator |
2026-04-11 05:31:06.887171 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-11 05:31:06.887183 | orchestrator | Saturday 11 April 2026 05:31:01 +0000 (0:00:00.823) 0:20:58.179 ********
2026-04-11 05:31:06.887194 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.887204 | orchestrator |
2026-04-11 05:31:06.887215 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-11 05:31:06.887226 | orchestrator | Saturday 11 April 2026 05:31:02 +0000 (0:00:00.792) 0:20:58.971 ********
2026-04-11 05:31:06.887237 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.887247 | orchestrator |
2026-04-11 05:31:06.887258 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-11 05:31:06.887269 | orchestrator | Saturday 11 April 2026 05:31:03 +0000 (0:00:00.780) 0:20:59.752 ********
2026-04-11 05:31:06.887280 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.887290 | orchestrator |
2026-04-11 05:31:06.887301 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-11 05:31:06.887312 | orchestrator | Saturday 11 April 2026 05:31:04 +0000 (0:00:00.777) 0:21:00.529 ********
2026-04-11 05:31:06.887323 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.887333 | orchestrator |
2026-04-11 05:31:06.887344 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-11 05:31:06.887355 | orchestrator | Saturday 11 April 2026 05:31:05 +0000 (0:00:00.810) 0:21:01.340 ********
2026-04-11 05:31:06.887365 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.887376 | orchestrator |
2026-04-11 05:31:06.887387 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-11 05:31:06.887398 | orchestrator | Saturday 11 April 2026 05:31:05 +0000 (0:00:00.793) 0:21:02.134 ********
2026-04-11 05:31:06.887416 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.887427 | orchestrator |
2026-04-11 05:31:06.887437 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-11 05:31:06.887448 | orchestrator | Saturday 11 April 2026 05:31:06 +0000 (0:00:00.793) 0:21:02.927 ********
2026-04-11 05:31:06.887459 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:06.887470 | orchestrator |
2026-04-11 05:31:06.887487 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-11 05:31:36.930896 | orchestrator | Saturday 11 April 2026 05:31:07 +0000 (0:00:00.785) 0:21:03.713 ********
2026-04-11 05:31:36.931018 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931038 | orchestrator |
2026-04-11 05:31:36.931054 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-11 05:31:36.931069 | orchestrator | Saturday 11 April 2026 05:31:08 +0000 (0:00:00.836) 0:21:04.550 ********
2026-04-11 05:31:36.931083 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931097 | orchestrator |
2026-04-11 05:31:36.931110 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-11 05:31:36.931123 | orchestrator | Saturday 11 April 2026 05:31:09 +0000 (0:00:00.782) 0:21:05.332 ********
2026-04-11 05:31:36.931210 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931224 | orchestrator |
2026-04-11 05:31:36.931238 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-11 05:31:36.931252 | orchestrator | Saturday 11 April 2026 05:31:09 +0000 (0:00:00.793) 0:21:06.126 ********
2026-04-11 05:31:36.931265 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931279 | orchestrator |
2026-04-11 05:31:36.931293 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-11 05:31:36.931308 | orchestrator | Saturday 11 April 2026 05:31:10 +0000 (0:00:00.806) 0:21:06.933 ********
2026-04-11 05:31:36.931321 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931348 | orchestrator |
2026-04-11 05:31:36.931362 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-11 05:31:36.931375 | orchestrator | Saturday 11 April 2026 05:31:11 +0000 (0:00:00.807) 0:21:07.740 ********
2026-04-11 05:31:36.931389 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931402 | orchestrator |
2026-04-11 05:31:36.931433 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-11 05:31:36.931447 | orchestrator | Saturday 11 April 2026 05:31:12 +0000 (0:00:00.803) 0:21:08.544 ********
2026-04-11 05:31:36.931462 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931476 | orchestrator |
2026-04-11 05:31:36.931490 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-11 05:31:36.931505 | orchestrator | Saturday 11 April 2026 05:31:13 +0000 (0:00:00.803) 0:21:09.348 ********
2026-04-11 05:31:36.931520 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931534 | orchestrator |
2026-04-11 05:31:36.931549 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-11 05:31:36.931564 | orchestrator | Saturday 11 April 2026 05:31:13 +0000 (0:00:00.757) 0:21:10.105 ********
2026-04-11 05:31:36.931579 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931593 | orchestrator |
2026-04-11 05:31:36.931608 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-11 05:31:36.931623 | orchestrator | Saturday 11 April 2026 05:31:14 +0000 (0:00:00.852) 0:21:10.958 ********
2026-04-11 05:31:36.931637 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931652 | orchestrator |
2026-04-11 05:31:36.931665 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-11 05:31:36.931679 | orchestrator | Saturday 11 April 2026 05:31:15 +0000 (0:00:00.794) 0:21:11.752 ********
2026-04-11 05:31:36.931693 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931707 | orchestrator |
2026-04-11 05:31:36.931720 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-11 05:31:36.931734 | orchestrator | Saturday 11 April 2026 05:31:16 +0000 (0:00:00.772) 0:21:12.525 ********
2026-04-11 05:31:36.931772 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931786 | orchestrator |
2026-04-11 05:31:36.931800 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-11 05:31:36.931814 | orchestrator | Saturday 11 April 2026 05:31:17 +0000 (0:00:00.774) 0:21:13.299 ********
2026-04-11 05:31:36.931827 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931841 | orchestrator |
2026-04-11 05:31:36.931855 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-11 05:31:36.931869 | orchestrator | Saturday 11 April 2026 05:31:17 +0000 (0:00:00.842) 0:21:14.142 ********
2026-04-11 05:31:36.931883 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931896 | orchestrator |
2026-04-11 05:31:36.931910 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-11 05:31:36.931923 | orchestrator | Saturday 11 April 2026 05:31:18 +0000 (0:00:00.796) 0:21:14.938 ********
2026-04-11 05:31:36.931937 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.931950 | orchestrator |
2026-04-11 05:31:36.931964 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-11 05:31:36.931978 | orchestrator | Saturday 11 April 2026 05:31:19 +0000 (0:00:00.772) 0:21:15.710 ********
2026-04-11 05:31:36.931991 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932004 | orchestrator |
2026-04-11 05:31:36.932018 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-11 05:31:36.932031 | orchestrator | Saturday 11 April 2026 05:31:20 +0000 (0:00:00.791) 0:21:16.503 ********
2026-04-11 05:31:36.932045 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932058 | orchestrator |
2026-04-11 05:31:36.932072 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-11 05:31:36.932085 | orchestrator | Saturday 11 April 2026 05:31:21 +0000 (0:00:00.792) 0:21:17.295 ********
2026-04-11 05:31:36.932099 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932113 | orchestrator |
2026-04-11 05:31:36.932127 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-11 05:31:36.932160 | orchestrator | Saturday 11 April 2026 05:31:21 +0000 (0:00:00.761) 0:21:18.057 ********
2026-04-11 05:31:36.932174 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932187 | orchestrator |
2026-04-11 05:31:36.932201 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-11 05:31:36.932214 | orchestrator | Saturday 11 April 2026 05:31:22 +0000 (0:00:00.778) 0:21:18.836 ********
2026-04-11 05:31:36.932228 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932241 | orchestrator |
2026-04-11 05:31:36.932277 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-11 05:31:36.932291 | orchestrator | Saturday 11 April 2026 05:31:23 +0000 (0:00:00.899) 0:21:19.735 ********
2026-04-11 05:31:36.932303 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932316 | orchestrator |
2026-04-11 05:31:36.932330 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-11 05:31:36.932343 | orchestrator | Saturday 11 April 2026 05:31:24 +0000 (0:00:00.815) 0:21:20.551 ********
2026-04-11 05:31:36.932356 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932370 | orchestrator |
2026-04-11 05:31:36.932383 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-11 05:31:36.932397 | orchestrator | Saturday 11 April 2026 05:31:25 +0000 (0:00:00.851) 0:21:21.403 ********
2026-04-11 05:31:36.932410 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932424 | orchestrator |
2026-04-11 05:31:36.932437 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-11 05:31:36.932450 | orchestrator | Saturday 11 April 2026 05:31:25 +0000 (0:00:00.804) 0:21:22.207 ********
2026-04-11 05:31:36.932463 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932476 | orchestrator |
2026-04-11 05:31:36.932490 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 05:31:36.932518 | orchestrator | Saturday 11 April 2026 05:31:26 +0000 (0:00:00.806) 0:21:23.013 ********
2026-04-11 05:31:36.932530 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932541 | orchestrator |
2026-04-11 05:31:36.932553 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 05:31:36.932565 | orchestrator | Saturday 11 April 2026 05:31:27 +0000 (0:00:00.810) 0:21:23.824 ********
2026-04-11 05:31:36.932583 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932594 | orchestrator |
2026-04-11 05:31:36.932605 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 05:31:36.932616 | orchestrator | Saturday 11 April 2026 05:31:28 +0000 (0:00:00.853) 0:21:24.678 ********
2026-04-11 05:31:36.932627 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932638 | orchestrator |
2026-04-11 05:31:36.932649 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 05:31:36.932660 | orchestrator | Saturday 11 April 2026 05:31:29 +0000 (0:00:00.792) 0:21:25.470 ********
2026-04-11 05:31:36.932672 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932683 | orchestrator |
2026-04-11 05:31:36.932694 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 05:31:36.932705 | orchestrator | Saturday 11 April 2026 05:31:30 +0000 (0:00:00.781) 0:21:26.252 ********
2026-04-11 05:31:36.932716 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-11 05:31:36.932728 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-11 05:31:36.932739 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-11 05:31:36.932750 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932761 | orchestrator |
2026-04-11 05:31:36.932772 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 05:31:36.932783 | orchestrator | Saturday 11 April 2026 05:31:31 +0000 (0:00:01.078) 0:21:27.330 ********
2026-04-11 05:31:36.932795 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-11 05:31:36.932806 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-11 05:31:36.932818 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-11 05:31:36.932829 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932840 | orchestrator |
2026-04-11 05:31:36.932851 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 05:31:36.932862 | orchestrator | Saturday 11 April 2026 05:31:32 +0000 (0:00:01.101) 0:21:28.432 ********
2026-04-11 05:31:36.932873 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-11 05:31:36.932885 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-11 05:31:36.932896 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-11 05:31:36.932908 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932919 | orchestrator |
2026-04-11 05:31:36.932930 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 05:31:36.932942 | orchestrator | Saturday 11 April 2026 05:31:33 +0000 (0:00:01.053) 0:21:29.485 ********
2026-04-11 05:31:36.932954 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.932965 | orchestrator |
2026-04-11 05:31:36.932977 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 05:31:36.932988 | orchestrator | Saturday 11 April 2026 05:31:34 +0000 (0:00:00.769) 0:21:30.255 ********
2026-04-11 05:31:36.932999 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-11 05:31:36.933011 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.933022 | orchestrator |
2026-04-11 05:31:36.933033 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-11 05:31:36.933045 | orchestrator | Saturday 11 April 2026 05:31:34 +0000 (0:00:00.888) 0:21:31.143 ********
2026-04-11 05:31:36.933056 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.933067 | orchestrator |
2026-04-11 05:31:36.933086 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-11 05:31:36.933098 | orchestrator | Saturday 11 April 2026 05:31:35 +0000 (0:00:00.802) 0:21:31.945 ********
2026-04-11 05:31:36.933109 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-11 05:31:36.933121 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-11 05:31:36.933145 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-11 05:31:36.933157 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.933169 | orchestrator |
2026-04-11 05:31:36.933180 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-11 05:31:36.933192 | orchestrator | Saturday 11 April 2026 05:31:36 +0000 (0:00:01.036) 0:21:32.982 ********
2026-04-11 05:31:36.933204 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:31:36.933215 | orchestrator |
2026-04-11 05:31:36.933234 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-11 05:32:09.556973 | orchestrator | Saturday 11 April 2026 05:31:37 +0000 (0:00:00.785) 0:21:33.767 ********
2026-04-11 05:32:09.557077 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:32:09.557092 | orchestrator |
2026-04-11 05:32:09.557102 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-11 05:32:09.557111 | orchestrator | Saturday 11 April 2026 05:31:38 +0000 (0:00:00.802) 0:21:34.569 ********
2026-04-11 05:32:09.557120 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:32:09.557129 | orchestrator |
2026-04-11 05:32:09.557138 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-11 05:32:09.557234 | orchestrator | Saturday 11 April 2026 05:31:39 +0000 (0:00:00.787) 0:21:35.357 ********
2026-04-11 05:32:09.557244 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:32:09.557252 | orchestrator |
2026-04-11 05:32:09.557262 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-04-11 05:32:09.557270 | orchestrator |
2026-04-11 05:32:09.557279 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-11 05:32:09.557288 | orchestrator | Saturday 11 April 2026 05:31:40 +0000 (0:00:01.010) 0:21:36.367 ********
2026-04-11 05:32:09.557297 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557306 | orchestrator |
2026-04-11 05:32:09.557315 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-11 05:32:09.557324 | orchestrator | Saturday 11 April 2026 05:31:40 +0000 (0:00:00.801) 0:21:37.169 ********
2026-04-11 05:32:09.557332 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557341 | orchestrator |
2026-04-11 05:32:09.557350 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-11 05:32:09.557374 | orchestrator | Saturday 11 April 2026 05:31:41 +0000 (0:00:00.796) 0:21:37.966 ********
2026-04-11 05:32:09.557383 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557392 | orchestrator |
2026-04-11 05:32:09.557401 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 05:32:09.557410 | orchestrator | Saturday 11 April 2026 05:31:42 +0000 (0:00:00.772) 0:21:38.738 ********
2026-04-11 05:32:09.557419 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557428 | orchestrator |
2026-04-11 05:32:09.557437 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 05:32:09.557445 | orchestrator | Saturday 11 April 2026 05:31:43 +0000 (0:00:00.848) 0:21:39.587 ********
2026-04-11 05:32:09.557454 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557463 | orchestrator |
2026-04-11 05:32:09.557471 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 05:32:09.557480 | orchestrator | Saturday 11 April 2026 05:31:44 +0000 (0:00:00.746) 0:21:40.333 ********
2026-04-11 05:32:09.557489 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557499 | orchestrator |
2026-04-11 05:32:09.557509 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 05:32:09.557519 | orchestrator | Saturday 11 April 2026 05:31:44 +0000 (0:00:00.788) 0:21:41.122 ********
2026-04-11 05:32:09.557548 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557559 | orchestrator |
2026-04-11 05:32:09.557569 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 05:32:09.557579 | orchestrator | Saturday 11 April 2026 05:31:45 +0000 (0:00:00.784) 0:21:41.906 ********
2026-04-11 05:32:09.557589 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557599 | orchestrator |
2026-04-11 05:32:09.557609 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 05:32:09.557619 | orchestrator | Saturday 11 April 2026 05:31:46 +0000 (0:00:00.774) 0:21:42.681 ********
2026-04-11 05:32:09.557629 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557639 | orchestrator |
2026-04-11 05:32:09.557649 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 05:32:09.557659 | orchestrator | Saturday 11 April 2026 05:31:47 +0000 (0:00:00.786) 0:21:43.467 ********
2026-04-11 05:32:09.557669 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557679 | orchestrator |
2026-04-11 05:32:09.557689 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 05:32:09.557699 | orchestrator | Saturday 11 April 2026 05:31:48 +0000 (0:00:00.779) 0:21:44.247 ********
2026-04-11 05:32:09.557709 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557720 | orchestrator |
2026-04-11 05:32:09.557730 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 05:32:09.557740 | orchestrator | Saturday 11 April 2026 05:31:48 +0000 (0:00:00.762) 0:21:45.010 ********
2026-04-11 05:32:09.557750 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557760 | orchestrator |
2026-04-11 05:32:09.557771 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-11 05:32:09.557781 | orchestrator | Saturday 11 April 2026 05:31:49 +0000 (0:00:00.818) 0:21:45.828 ********
2026-04-11 05:32:09.557791 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557802 | orchestrator |
2026-04-11 05:32:09.557811 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-11 05:32:09.557822 | orchestrator | Saturday 11 April 2026 05:31:50 +0000 (0:00:00.785) 0:21:46.614 ********
2026-04-11 05:32:09.557833 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557843 | orchestrator |
2026-04-11 05:32:09.557853 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-11 05:32:09.557864 | orchestrator | Saturday 11 April 2026 05:31:51 +0000 (0:00:00.781) 0:21:47.395 ********
2026-04-11 05:32:09.557874 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557885 | orchestrator |
2026-04-11 05:32:09.557893 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-11 05:32:09.557902 | orchestrator | Saturday 11 April 2026 05:31:51 +0000 (0:00:00.781) 0:21:48.177 ********
2026-04-11 05:32:09.557910 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557919 | orchestrator |
2026-04-11 05:32:09.557928 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-11 05:32:09.557936 | orchestrator | Saturday 11 April 2026 05:31:52 +0000 (0:00:00.776) 0:21:48.954 ********
2026-04-11 05:32:09.557945 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.557954 | orchestrator |
2026-04-11 05:32:09.557978 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-11 05:32:09.557988 | orchestrator | Saturday 11 April 2026 05:31:53 +0000 (0:00:00.806) 0:21:49.760 ********
2026-04-11 05:32:09.557996 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558005 | orchestrator |
2026-04-11 05:32:09.558069 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-11 05:32:09.558080 | orchestrator | Saturday 11 April 2026 05:31:54 +0000 (0:00:00.784) 0:21:50.545 ********
2026-04-11 05:32:09.558089 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558098 | orchestrator |
2026-04-11 05:32:09.558106 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-11 05:32:09.558117 | orchestrator | Saturday 11 April 2026 05:31:55 +0000 (0:00:00.785) 0:21:51.331 ********
2026-04-11 05:32:09.558133 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558160 | orchestrator |
2026-04-11 05:32:09.558169 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-11 05:32:09.558178 | orchestrator | Saturday 11 April 2026 05:31:55 +0000 (0:00:00.824) 0:21:52.155 ********
2026-04-11 05:32:09.558187 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558196 | orchestrator |
2026-04-11 05:32:09.558205 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-11 05:32:09.558214 | orchestrator | Saturday 11 April 2026 05:31:56 +0000 (0:00:00.824) 0:21:52.980 ********
2026-04-11 05:32:09.558222 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558231 | orchestrator |
2026-04-11 05:32:09.558240 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-11 05:32:09.558249 | orchestrator | Saturday 11 April 2026 05:31:57 +0000 (0:00:00.773) 0:21:53.754 ********
2026-04-11 05:32:09.558257 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558266 | orchestrator |
2026-04-11 05:32:09.558280 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-11 05:32:09.558289 | orchestrator | Saturday 11 April 2026 05:31:58 +0000 (0:00:00.810) 0:21:54.564 ********
2026-04-11 05:32:09.558298 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558307 | orchestrator |
2026-04-11 05:32:09.558316 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-11 05:32:09.558324 | orchestrator | Saturday 11 April 2026 05:31:59 +0000 (0:00:00.793) 0:21:55.357 ********
2026-04-11 05:32:09.558333 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558342 | orchestrator |
2026-04-11 05:32:09.558351 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-11 05:32:09.558360 | orchestrator | Saturday 11 April 2026 05:31:59 +0000 (0:00:00.779) 0:21:56.137 ********
2026-04-11 05:32:09.558368 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558377 | orchestrator |
2026-04-11 05:32:09.558386 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-11 05:32:09.558394 | orchestrator | Saturday 11 April 2026 05:32:00 +0000 (0:00:00.816) 0:21:56.954 ********
2026-04-11 05:32:09.558403 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558412 | orchestrator |
2026-04-11 05:32:09.558421 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-11 05:32:09.558430 | orchestrator | Saturday 11 April 2026 05:32:01 +0000 (0:00:00.798) 0:21:57.753 ********
2026-04-11 05:32:09.558438 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558447 | orchestrator |
2026-04-11 05:32:09.558456 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-11 05:32:09.558465 | orchestrator | Saturday 11 April 2026 05:32:02 +0000 (0:00:00.782) 0:21:58.535 ********
2026-04-11 05:32:09.558474 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558482 | orchestrator |
2026-04-11 05:32:09.558491 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-11 05:32:09.558500 | orchestrator | Saturday 11 April 2026 05:32:03 +0000 (0:00:00.831) 0:21:59.366 ********
2026-04-11 05:32:09.558508 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558517 | orchestrator |
2026-04-11 05:32:09.558526 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-11 05:32:09.558535 | orchestrator | Saturday 11 April 2026 05:32:03 +0000 (0:00:00.796) 0:22:00.163 ********
2026-04-11 05:32:09.558544 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558552 | orchestrator |
2026-04-11 05:32:09.558561 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-11 05:32:09.558570 | orchestrator | Saturday 11 April 2026 05:32:04 +0000 (0:00:00.771) 0:22:00.935 ********
2026-04-11 05:32:09.558579 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558588 | orchestrator |
2026-04-11 05:32:09.558596 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-11 05:32:09.558611 | orchestrator | Saturday 11 April 2026 05:32:05 +0000 (0:00:00.903) 0:22:01.839 ********
2026-04-11 05:32:09.558620 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558629 | orchestrator |
2026-04-11 05:32:09.558638 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-11 05:32:09.558646 | orchestrator | Saturday 11 April 2026 05:32:06 +0000 (0:00:00.784) 0:22:02.623 ********
2026-04-11 05:32:09.558655 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558664 | orchestrator |
2026-04-11 05:32:09.558672 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-11 05:32:09.558681 | orchestrator | Saturday 11 April 2026 05:32:07 +0000 (0:00:00.783) 0:22:03.407 ********
2026-04-11 05:32:09.558690 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558699 | orchestrator |
2026-04-11 05:32:09.558707 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-11 05:32:09.558716 | orchestrator | Saturday 11 April 2026 05:32:07 +0000 (0:00:00.781) 0:22:04.189 ********
2026-04-11 05:32:09.558725 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558733 | orchestrator |
2026-04-11 05:32:09.558742 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-11 05:32:09.558751 | orchestrator | Saturday 11 April 2026 05:32:08 +0000 (0:00:00.807) 0:22:04.996 ********
2026-04-11 05:32:09.558759 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:09.558768 | orchestrator |
2026-04-11 05:32:09.558777 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-11 05:32:09.558794 | orchestrator | Saturday 11 April 2026 05:32:09 +0000 (0:00:00.765) 0:22:05.762 ********
2026-04-11 05:32:47.036669 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.036771 | orchestrator |
2026-04-11 05:32:47.036783 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-11 05:32:47.036792 | orchestrator | Saturday 11 April 2026 05:32:10 +0000 (0:00:00.774) 0:22:06.536 ********
2026-04-11 05:32:47.036798 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.036805 | orchestrator |
2026-04-11 05:32:47.036812 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-11 05:32:47.036820 | orchestrator | Saturday 11 April 2026 05:32:11 +0000 (0:00:00.852) 0:22:07.389 ********
2026-04-11 05:32:47.036827 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.036834 | orchestrator |
2026-04-11 05:32:47.036841 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-11 05:32:47.036848 | orchestrator | Saturday 11 April 2026 05:32:11 +0000 (0:00:00.781) 0:22:08.170 ********
2026-04-11 05:32:47.036855 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.036862 | orchestrator |
2026-04-11 05:32:47.036869 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-11 05:32:47.036876 | orchestrator | Saturday 11 April 2026 05:32:12 +0000 (0:00:00.749) 0:22:08.920 ********
2026-04-11 05:32:47.036883 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.036890 | orchestrator |
2026-04-11 05:32:47.036897 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-11 05:32:47.036904 | orchestrator | Saturday 11 April 2026 05:32:13 +0000 (0:00:00.796) 0:22:09.717 ********
2026-04-11 05:32:47.036926 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.036933 | orchestrator |
2026-04-11 05:32:47.036940 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-11 05:32:47.036947 | orchestrator | Saturday 11 April 2026 05:32:14 +0000 (0:00:00.810) 0:22:10.527 ********
2026-04-11 05:32:47.036954 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.036960 | orchestrator |
2026-04-11 05:32:47.036967 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-11 05:32:47.036973 | orchestrator | Saturday 11 April 2026 05:32:15 +0000 (0:00:00.777) 0:22:11.305 ********
2026-04-11 05:32:47.036980 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.036987 | orchestrator |
2026-04-11 05:32:47.037010 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-11 05:32:47.037018 | orchestrator | Saturday 11 April 2026 05:32:15 +0000 (0:00:00.813) 0:22:12.118 ********
2026-04-11 05:32:47.037024 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037031 | orchestrator |
2026-04-11 05:32:47.037038 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-11 05:32:47.037044 | orchestrator | Saturday 11 April 2026 05:32:17 +0000 (0:00:01.249) 0:22:13.368 ********
2026-04-11 05:32:47.037051 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037057 | orchestrator |
2026-04-11 05:32:47.037064 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-11 05:32:47.037071 | orchestrator | Saturday 11 April 2026 05:32:17 +0000 (0:00:00.796) 0:22:14.165 ********
2026-04-11 05:32:47.037077 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037083 | orchestrator |
2026-04-11 05:32:47.037090 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-11 05:32:47.037096 | orchestrator | Saturday 11 April 2026 05:32:18 +0000 (0:00:00.889) 0:22:15.055 ********
2026-04-11 05:32:47.037103 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037110 | orchestrator |
2026-04-11 05:32:47.037116 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-11 05:32:47.037123 | orchestrator | Saturday 11 April 2026 05:32:19 +0000 (0:00:00.793) 0:22:15.848 ********
2026-04-11 05:32:47.037130 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037136 | orchestrator |
2026-04-11 05:32:47.037143 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 05:32:47.037167 | orchestrator | Saturday 11 April 2026 05:32:20 +0000 (0:00:00.792) 0:22:16.641 ********
2026-04-11 05:32:47.037174 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037181 | orchestrator |
2026-04-11 05:32:47.037188 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 05:32:47.037194 | orchestrator | Saturday 11 April 2026 05:32:21 +0000 (0:00:00.837) 0:22:17.478 ********
2026-04-11 05:32:47.037199 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037205 | orchestrator |
2026-04-11 05:32:47.037211 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 05:32:47.037218 | orchestrator | Saturday 11 April 2026 05:32:22 +0000 (0:00:00.812) 0:22:18.291 ********
2026-04-11 05:32:47.037225 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037232 | orchestrator |
2026-04-11 05:32:47.037239 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 05:32:47.037247 | orchestrator | Saturday 11 April 2026 05:32:22 +0000 (0:00:00.767) 0:22:19.058 ********
2026-04-11 05:32:47.037255 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037264 | orchestrator |
2026-04-11 05:32:47.037271 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 05:32:47.037278 | orchestrator | Saturday 11 April 2026 05:32:23 +0000 (0:00:00.796) 0:22:19.854 ********
2026-04-11 05:32:47.037286 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-11 05:32:47.037293 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-11 05:32:47.037301 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-11 05:32:47.037309 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037318 | orchestrator |
2026-04-11 05:32:47.037325 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 05:32:47.037332 | orchestrator | Saturday 11 April 2026 05:32:24 +0000 (0:00:01.064) 0:22:20.919 ********
2026-04-11 05:32:47.037339 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-11 05:32:47.037347 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-11 05:32:47.037370 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-11 05:32:47.037377 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037385 | orchestrator |
2026-04-11 05:32:47.037399 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 05:32:47.037409 | orchestrator | Saturday 11 April 2026 05:32:25 +0000 (0:00:01.084) 0:22:22.004 ********
2026-04-11 05:32:47.037416 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-11 05:32:47.037424 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-11 05:32:47.037431 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-11 05:32:47.037438 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037446 | orchestrator |
2026-04-11 05:32:47.037456 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 05:32:47.037463 | orchestrator | Saturday 11 April 2026 05:32:27 +0000 (0:00:01.466) 0:22:23.470 ********
2026-04-11 05:32:47.037469 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037477 | orchestrator |
2026-04-11 05:32:47.037484 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 05:32:47.037491 | orchestrator | Saturday 11 April 2026 05:32:28 +0000 (0:00:00.805) 0:22:24.276 ********
2026-04-11 05:32:47.037499 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-11 05:32:47.037508 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037515 | orchestrator |
2026-04-11 05:32:47.037523 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-11 05:32:47.037530 | orchestrator | Saturday 11 April 2026 05:32:29 +0000 (0:00:01.406) 0:22:25.682 ********
2026-04-11 05:32:47.037542 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037552 | orchestrator |
2026-04-11 05:32:47.037560 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-11 05:32:47.037567 | orchestrator | Saturday 11 April 2026 05:32:30 +0000 (0:00:00.806) 0:22:26.488 ********
2026-04-11 05:32:47.037574 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-11 05:32:47.037581 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-11 05:32:47.037588 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-11 05:32:47.037595 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037601 | orchestrator |
2026-04-11 05:32:47.037608 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-11 05:32:47.037615 | orchestrator | Saturday 11 April 2026 05:32:31 +0000 (0:00:01.060) 0:22:27.549 ********
2026-04-11 05:32:47.037622 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037628 | orchestrator |
2026-04-11 05:32:47.037635 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-11 05:32:47.037642 | orchestrator | Saturday 11 April 2026 05:32:32 +0000 (0:00:00.783) 0:22:28.333 ********
2026-04-11 05:32:47.037649 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037656 | orchestrator |
2026-04-11 05:32:47.037662 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-11 05:32:47.037669 | orchestrator | Saturday 11 April 2026 05:32:32 +0000 (0:00:00.785) 0:22:29.119 ********
2026-04-11 05:32:47.037675 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037682 | orchestrator |
2026-04-11 05:32:47.037689 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-11 05:32:47.037696 | orchestrator | Saturday 11 April 2026 05:32:33 +0000 (0:00:00.770) 0:22:29.889 ********
2026-04-11 05:32:47.037703 | orchestrator | skipping: [testbed-node-2]
2026-04-11 05:32:47.037709 | orchestrator |
2026-04-11
05:32:47.037716 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-04-11 05:32:47.037723 | orchestrator | 2026-04-11 05:32:47.037730 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-04-11 05:32:47.037736 | orchestrator | Saturday 11 April 2026 05:32:35 +0000 (0:00:01.338) 0:22:31.227 ******** 2026-04-11 05:32:47.037743 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:32:47.037749 | orchestrator | 2026-04-11 05:32:47.037756 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-04-11 05:32:47.037763 | orchestrator | Saturday 11 April 2026 05:32:37 +0000 (0:00:02.896) 0:22:34.124 ******** 2026-04-11 05:32:47.037774 | orchestrator | changed: [testbed-node-0] 2026-04-11 05:32:47.037781 | orchestrator | 2026-04-11 05:32:47.037788 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-11 05:32:47.037794 | orchestrator | Saturday 11 April 2026 05:32:40 +0000 (0:00:02.393) 0:22:36.517 ******** 2026-04-11 05:32:47.037801 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-04-11 05:32:47.037808 | orchestrator | 2026-04-11 05:32:47.037815 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-11 05:32:47.037821 | orchestrator | Saturday 11 April 2026 05:32:41 +0000 (0:00:01.285) 0:22:37.802 ******** 2026-04-11 05:32:47.037828 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:32:47.037835 | orchestrator | 2026-04-11 05:32:47.037841 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-11 05:32:47.037848 | orchestrator | Saturday 11 April 2026 05:32:43 +0000 (0:00:01.537) 0:22:39.340 ******** 2026-04-11 05:32:47.037854 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:32:47.037861 | orchestrator | 2026-04-11 05:32:47.037868 
| orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-11 05:32:47.037874 | orchestrator | Saturday 11 April 2026 05:32:44 +0000 (0:00:01.145) 0:22:40.485 ******** 2026-04-11 05:32:47.037881 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:32:47.037888 | orchestrator | 2026-04-11 05:32:47.037894 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 05:32:47.037901 | orchestrator | Saturday 11 April 2026 05:32:45 +0000 (0:00:01.470) 0:22:41.956 ******** 2026-04-11 05:32:47.037908 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:32:47.037914 | orchestrator | 2026-04-11 05:32:47.037921 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-11 05:32:47.037928 | orchestrator | Saturday 11 April 2026 05:32:46 +0000 (0:00:01.127) 0:22:43.083 ******** 2026-04-11 05:32:47.037935 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:32:47.037942 | orchestrator | 2026-04-11 05:32:47.037953 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-11 05:33:12.207271 | orchestrator | Saturday 11 April 2026 05:32:48 +0000 (0:00:01.128) 0:22:44.212 ******** 2026-04-11 05:33:12.207390 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:33:12.207407 | orchestrator | 2026-04-11 05:33:12.207420 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-11 05:33:12.207433 | orchestrator | Saturday 11 April 2026 05:32:49 +0000 (0:00:01.166) 0:22:45.379 ******** 2026-04-11 05:33:12.207444 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:12.207457 | orchestrator | 2026-04-11 05:33:12.207468 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-11 05:33:12.207480 | orchestrator | Saturday 11 April 2026 05:32:50 +0000 (0:00:01.117) 0:22:46.496 ******** 2026-04-11 
05:33:12.207490 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:33:12.207501 | orchestrator | 2026-04-11 05:33:12.207512 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-11 05:33:12.207523 | orchestrator | Saturday 11 April 2026 05:32:51 +0000 (0:00:01.134) 0:22:47.631 ******** 2026-04-11 05:33:12.207534 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:33:12.207545 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:33:12.207556 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:33:12.207567 | orchestrator | 2026-04-11 05:33:12.207578 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-11 05:33:12.207589 | orchestrator | Saturday 11 April 2026 05:32:53 +0000 (0:00:02.019) 0:22:49.650 ******** 2026-04-11 05:33:12.207617 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:33:12.207629 | orchestrator | 2026-04-11 05:33:12.207640 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-11 05:33:12.207651 | orchestrator | Saturday 11 April 2026 05:32:54 +0000 (0:00:01.285) 0:22:50.936 ******** 2026-04-11 05:33:12.207682 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:33:12.207694 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:33:12.207705 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:33:12.207716 | orchestrator | 2026-04-11 05:33:12.207727 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-11 05:33:12.207740 | orchestrator | Saturday 11 April 2026 05:32:57 +0000 (0:00:03.258) 0:22:54.194 ******** 2026-04-11 05:33:12.207753 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2026-04-11 05:33:12.207765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-11 05:33:12.207778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-11 05:33:12.207790 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:12.207803 | orchestrator | 2026-04-11 05:33:12.207815 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-11 05:33:12.207828 | orchestrator | Saturday 11 April 2026 05:32:59 +0000 (0:00:01.858) 0:22:56.053 ******** 2026-04-11 05:33:12.207842 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-11 05:33:12.207859 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-11 05:33:12.207871 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-11 05:33:12.207884 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:12.207897 | orchestrator | 2026-04-11 05:33:12.207911 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-11 05:33:12.207924 | orchestrator | Saturday 11 April 2026 05:33:01 +0000 (0:00:01.650) 0:22:57.704 ******** 2026-04-11 05:33:12.207939 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 05:33:12.207954 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 05:33:12.207985 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 05:33:12.207999 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:12.208012 | orchestrator | 2026-04-11 05:33:12.208024 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-11 05:33:12.208037 | orchestrator | Saturday 11 April 2026 05:33:02 +0000 (0:00:01.199) 0:22:58.903 ******** 2026-04-11 05:33:12.208058 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 05:32:55.604735', 'end': '2026-04-11 05:32:55.654309', 'delta': '0:00:00.049574', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-11 05:33:12.208083 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '26fb3b048944', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 05:32:56.171269', 'end': '2026-04-11 05:32:56.198804', 'delta': '0:00:00.027535', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26fb3b048944'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-11 05:33:12.208096 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5c0324173fbf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 05:32:56.752766', 'end': '2026-04-11 05:32:56.797589', 'delta': '0:00:00.044823', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5c0324173fbf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-11 05:33:12.208107 | orchestrator | 2026-04-11 05:33:12.208118 | orchestrator | TASK [ceph-facts : 
Set_fact _container_exec_cmd] ******************************* 2026-04-11 05:33:12.208129 | orchestrator | Saturday 11 April 2026 05:33:03 +0000 (0:00:01.212) 0:23:00.115 ******** 2026-04-11 05:33:12.208140 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:33:12.208151 | orchestrator | 2026-04-11 05:33:12.208204 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-11 05:33:12.208218 | orchestrator | Saturday 11 April 2026 05:33:05 +0000 (0:00:01.335) 0:23:01.450 ******** 2026-04-11 05:33:12.208228 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:12.208239 | orchestrator | 2026-04-11 05:33:12.208250 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-11 05:33:12.208260 | orchestrator | Saturday 11 April 2026 05:33:06 +0000 (0:00:01.314) 0:23:02.765 ******** 2026-04-11 05:33:12.208271 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:33:12.208282 | orchestrator | 2026-04-11 05:33:12.208292 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-11 05:33:12.208303 | orchestrator | Saturday 11 April 2026 05:33:07 +0000 (0:00:01.138) 0:23:03.903 ******** 2026-04-11 05:33:12.208314 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:33:12.208325 | orchestrator | 2026-04-11 05:33:12.208335 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 05:33:12.208346 | orchestrator | Saturday 11 April 2026 05:33:09 +0000 (0:00:01.925) 0:23:05.829 ******** 2026-04-11 05:33:12.208356 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:33:12.208368 | orchestrator | 2026-04-11 05:33:12.208378 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-11 05:33:12.208389 | orchestrator | Saturday 11 April 2026 05:33:10 +0000 (0:00:01.155) 0:23:06.985 ******** 2026-04-11 05:33:12.208407 | orchestrator | skipping: 
[testbed-node-0] 2026-04-11 05:33:12.208418 | orchestrator | 2026-04-11 05:33:12.208428 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-11 05:33:12.208439 | orchestrator | Saturday 11 April 2026 05:33:11 +0000 (0:00:01.183) 0:23:08.169 ******** 2026-04-11 05:33:12.208450 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:12.208461 | orchestrator | 2026-04-11 05:33:12.208479 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 05:33:22.729931 | orchestrator | Saturday 11 April 2026 05:33:13 +0000 (0:00:01.265) 0:23:09.435 ******** 2026-04-11 05:33:22.730106 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:22.730126 | orchestrator | 2026-04-11 05:33:22.730139 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-11 05:33:22.730150 | orchestrator | Saturday 11 April 2026 05:33:14 +0000 (0:00:01.161) 0:23:10.596 ******** 2026-04-11 05:33:22.730218 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:22.730242 | orchestrator | 2026-04-11 05:33:22.730261 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-11 05:33:22.730279 | orchestrator | Saturday 11 April 2026 05:33:15 +0000 (0:00:01.161) 0:23:11.758 ******** 2026-04-11 05:33:22.730294 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:22.730305 | orchestrator | 2026-04-11 05:33:22.730316 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-11 05:33:22.730327 | orchestrator | Saturday 11 April 2026 05:33:16 +0000 (0:00:01.116) 0:23:12.875 ******** 2026-04-11 05:33:22.730338 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:22.730349 | orchestrator | 2026-04-11 05:33:22.730360 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-11 05:33:22.730372 | 
orchestrator | Saturday 11 April 2026 05:33:17 +0000 (0:00:01.189) 0:23:14.065 ******** 2026-04-11 05:33:22.730382 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:22.730393 | orchestrator | 2026-04-11 05:33:22.730404 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-11 05:33:22.730415 | orchestrator | Saturday 11 April 2026 05:33:19 +0000 (0:00:01.168) 0:23:15.233 ******** 2026-04-11 05:33:22.730442 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:22.730453 | orchestrator | 2026-04-11 05:33:22.730467 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-11 05:33:22.730480 | orchestrator | Saturday 11 April 2026 05:33:20 +0000 (0:00:01.185) 0:23:16.419 ******** 2026-04-11 05:33:22.730493 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:22.730507 | orchestrator | 2026-04-11 05:33:22.730519 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-11 05:33:22.730532 | orchestrator | Saturday 11 April 2026 05:33:21 +0000 (0:00:01.125) 0:23:17.544 ******** 2026-04-11 05:33:22.730548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:33:22.730566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:33:22.730579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:33:22.730617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 05:33:22.730634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:33:22.730665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:33:22.730679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:33:22.730704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4dd7cb49', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 
'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 05:33:22.730731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:33:22.730744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:33:22.730757 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:22.730771 | orchestrator | 2026-04-11 05:33:22.730802 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 05:33:22.730816 | orchestrator | Saturday 11 April 2026 05:33:22 +0000 (0:00:01.314) 0:23:18.859 ******** 2026-04-11 05:33:22.730850 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:33:27.954952 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:33:27.955060 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:33:27.955078 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:33:27.955109 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:33:27.955121 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:33:27.955133 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:33:27.955204 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4dd7cb49', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': 
'2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:33:27.955231 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:33:27.955243 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:33:27.955256 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:33:27.955269 | orchestrator | 2026-04-11 05:33:27.955281 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-11 05:33:27.955293 | orchestrator | Saturday 11 April 2026 05:33:23 +0000 (0:00:01.224) 0:23:20.084 ******** 2026-04-11 05:33:27.955304 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:33:27.955316 | orchestrator | 2026-04-11 05:33:27.955327 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-11 05:33:27.955338 | orchestrator 
| Saturday 11 April 2026 05:33:25 +0000 (0:00:01.506) 0:23:21.590 ******** 2026-04-11 05:33:27.955348 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:33:27.955359 | orchestrator | 2026-04-11 05:33:27.955370 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:33:27.955380 | orchestrator | Saturday 11 April 2026 05:33:26 +0000 (0:00:01.116) 0:23:22.706 ******** 2026-04-11 05:33:27.955391 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:33:27.955402 | orchestrator | 2026-04-11 05:33:27.955413 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:33:27.955431 | orchestrator | Saturday 11 April 2026 05:33:27 +0000 (0:00:01.458) 0:23:24.165 ******** 2026-04-11 05:34:10.512722 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.512856 | orchestrator | 2026-04-11 05:34:10.512874 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:34:10.512887 | orchestrator | Saturday 11 April 2026 05:33:29 +0000 (0:00:01.168) 0:23:25.333 ******** 2026-04-11 05:34:10.512898 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.512911 | orchestrator | 2026-04-11 05:34:10.512922 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:34:10.512933 | orchestrator | Saturday 11 April 2026 05:33:30 +0000 (0:00:01.220) 0:23:26.554 ******** 2026-04-11 05:34:10.512944 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.512955 | orchestrator | 2026-04-11 05:34:10.512966 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 05:34:10.512977 | orchestrator | Saturday 11 April 2026 05:33:31 +0000 (0:00:01.150) 0:23:27.704 ******** 2026-04-11 05:34:10.512988 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:34:10.512999 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-04-11 05:34:10.513010 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-11 05:34:10.513021 | orchestrator | 2026-04-11 05:34:10.513046 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 05:34:10.513118 | orchestrator | Saturday 11 April 2026 05:33:33 +0000 (0:00:02.036) 0:23:29.741 ******** 2026-04-11 05:34:10.513131 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-11 05:34:10.513142 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-11 05:34:10.513153 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-11 05:34:10.513164 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.513174 | orchestrator | 2026-04-11 05:34:10.513185 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 05:34:10.513196 | orchestrator | Saturday 11 April 2026 05:33:34 +0000 (0:00:01.139) 0:23:30.880 ******** 2026-04-11 05:34:10.513207 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.513217 | orchestrator | 2026-04-11 05:34:10.513228 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 05:34:10.513239 | orchestrator | Saturday 11 April 2026 05:33:35 +0000 (0:00:01.105) 0:23:31.986 ******** 2026-04-11 05:34:10.513251 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:34:10.513264 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:34:10.513277 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:34:10.513290 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:34:10.513301 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-11 05:34:10.513316 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:34:10.513328 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:34:10.513341 | orchestrator | 2026-04-11 05:34:10.513353 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 05:34:10.513366 | orchestrator | Saturday 11 April 2026 05:33:37 +0000 (0:00:01.720) 0:23:33.706 ******** 2026-04-11 05:34:10.513378 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 05:34:10.513391 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:34:10.513404 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:34:10.513416 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:34:10.513429 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:34:10.513441 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:34:10.513454 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:34:10.513466 | orchestrator | 2026-04-11 05:34:10.513479 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 05:34:10.513491 | orchestrator | Saturday 11 April 2026 05:33:39 +0000 (0:00:02.311) 0:23:36.018 ******** 2026-04-11 05:34:10.513504 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-04-11 05:34:10.513518 | orchestrator | 2026-04-11 05:34:10.513531 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 05:34:10.513544 
| orchestrator | Saturday 11 April 2026 05:33:40 +0000 (0:00:01.105) 0:23:37.123 ******** 2026-04-11 05:34:10.513556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-04-11 05:34:10.513568 | orchestrator | 2026-04-11 05:34:10.513581 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 05:34:10.513594 | orchestrator | Saturday 11 April 2026 05:33:42 +0000 (0:00:01.126) 0:23:38.250 ******** 2026-04-11 05:34:10.513606 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:34:10.513618 | orchestrator | 2026-04-11 05:34:10.513628 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 05:34:10.513647 | orchestrator | Saturday 11 April 2026 05:33:43 +0000 (0:00:01.602) 0:23:39.853 ******** 2026-04-11 05:34:10.513658 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.513668 | orchestrator | 2026-04-11 05:34:10.513679 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-11 05:34:10.513690 | orchestrator | Saturday 11 April 2026 05:33:44 +0000 (0:00:01.158) 0:23:41.012 ******** 2026-04-11 05:34:10.513701 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.513712 | orchestrator | 2026-04-11 05:34:10.513741 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 05:34:10.513753 | orchestrator | Saturday 11 April 2026 05:33:45 +0000 (0:00:01.110) 0:23:42.122 ******** 2026-04-11 05:34:10.513764 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.513775 | orchestrator | 2026-04-11 05:34:10.513785 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 05:34:10.513796 | orchestrator | Saturday 11 April 2026 05:33:47 +0000 (0:00:01.135) 0:23:43.258 ******** 2026-04-11 05:34:10.513807 | orchestrator | ok: [testbed-node-0] 
2026-04-11 05:34:10.513818 | orchestrator | 2026-04-11 05:34:10.513829 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 05:34:10.513840 | orchestrator | Saturday 11 April 2026 05:33:48 +0000 (0:00:01.519) 0:23:44.777 ******** 2026-04-11 05:34:10.513851 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.513862 | orchestrator | 2026-04-11 05:34:10.513872 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 05:34:10.513883 | orchestrator | Saturday 11 April 2026 05:33:49 +0000 (0:00:01.108) 0:23:45.886 ******** 2026-04-11 05:34:10.513894 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.513905 | orchestrator | 2026-04-11 05:34:10.513916 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 05:34:10.513932 | orchestrator | Saturday 11 April 2026 05:33:50 +0000 (0:00:01.184) 0:23:47.071 ******** 2026-04-11 05:34:10.513943 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:34:10.513954 | orchestrator | 2026-04-11 05:34:10.513965 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 05:34:10.513976 | orchestrator | Saturday 11 April 2026 05:33:52 +0000 (0:00:01.570) 0:23:48.641 ******** 2026-04-11 05:34:10.513987 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:34:10.513998 | orchestrator | 2026-04-11 05:34:10.514009 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 05:34:10.514093 | orchestrator | Saturday 11 April 2026 05:33:53 +0000 (0:00:01.540) 0:23:50.182 ******** 2026-04-11 05:34:10.514107 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.514118 | orchestrator | 2026-04-11 05:34:10.514129 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 05:34:10.514140 | orchestrator | Saturday 11 
April 2026 05:33:55 +0000 (0:00:01.110) 0:23:51.293 ******** 2026-04-11 05:34:10.514151 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:34:10.514162 | orchestrator | 2026-04-11 05:34:10.514173 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 05:34:10.514184 | orchestrator | Saturday 11 April 2026 05:33:56 +0000 (0:00:01.235) 0:23:52.528 ******** 2026-04-11 05:34:10.514195 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.514206 | orchestrator | 2026-04-11 05:34:10.514217 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 05:34:10.514228 | orchestrator | Saturday 11 April 2026 05:33:57 +0000 (0:00:01.142) 0:23:53.671 ******** 2026-04-11 05:34:10.514238 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.514249 | orchestrator | 2026-04-11 05:34:10.514260 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 05:34:10.514271 | orchestrator | Saturday 11 April 2026 05:33:58 +0000 (0:00:01.212) 0:23:54.884 ******** 2026-04-11 05:34:10.514282 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.514293 | orchestrator | 2026-04-11 05:34:10.514304 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 05:34:10.514322 | orchestrator | Saturday 11 April 2026 05:33:59 +0000 (0:00:01.140) 0:23:56.025 ******** 2026-04-11 05:34:10.514333 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.514344 | orchestrator | 2026-04-11 05:34:10.514355 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 05:34:10.514366 | orchestrator | Saturday 11 April 2026 05:34:01 +0000 (0:00:01.292) 0:23:57.317 ******** 2026-04-11 05:34:10.514377 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.514388 | orchestrator | 2026-04-11 05:34:10.514398 | orchestrator | 
TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 05:34:10.514410 | orchestrator | Saturday 11 April 2026 05:34:02 +0000 (0:00:01.132) 0:23:58.450 ******** 2026-04-11 05:34:10.514420 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:34:10.514432 | orchestrator | 2026-04-11 05:34:10.514442 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 05:34:10.514453 | orchestrator | Saturday 11 April 2026 05:34:03 +0000 (0:00:01.254) 0:23:59.704 ******** 2026-04-11 05:34:10.514464 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:34:10.514475 | orchestrator | 2026-04-11 05:34:10.514486 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 05:34:10.514497 | orchestrator | Saturday 11 April 2026 05:34:04 +0000 (0:00:01.238) 0:24:00.943 ******** 2026-04-11 05:34:10.514508 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:34:10.514519 | orchestrator | 2026-04-11 05:34:10.514530 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-11 05:34:10.514541 | orchestrator | Saturday 11 April 2026 05:34:05 +0000 (0:00:01.160) 0:24:02.104 ******** 2026-04-11 05:34:10.514552 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.514563 | orchestrator | 2026-04-11 05:34:10.514573 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-11 05:34:10.514584 | orchestrator | Saturday 11 April 2026 05:34:07 +0000 (0:00:01.110) 0:24:03.214 ******** 2026-04-11 05:34:10.514595 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.514606 | orchestrator | 2026-04-11 05:34:10.514617 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-11 05:34:10.514628 | orchestrator | Saturday 11 April 2026 05:34:08 +0000 (0:00:01.139) 0:24:04.354 ******** 2026-04-11 05:34:10.514639 | 
orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.514650 | orchestrator | 2026-04-11 05:34:10.514661 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-11 05:34:10.514672 | orchestrator | Saturday 11 April 2026 05:34:09 +0000 (0:00:01.151) 0:24:05.505 ******** 2026-04-11 05:34:10.514682 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:10.514693 | orchestrator | 2026-04-11 05:34:10.514704 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-11 05:34:10.514715 | orchestrator | Saturday 11 April 2026 05:34:10 +0000 (0:00:01.163) 0:24:06.669 ******** 2026-04-11 05:34:10.514735 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.676305 | orchestrator | 2026-04-11 05:34:59.676426 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-11 05:34:59.676445 | orchestrator | Saturday 11 April 2026 05:34:11 +0000 (0:00:01.145) 0:24:07.815 ******** 2026-04-11 05:34:59.676458 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.676470 | orchestrator | 2026-04-11 05:34:59.676482 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-11 05:34:59.676494 | orchestrator | Saturday 11 April 2026 05:34:12 +0000 (0:00:01.177) 0:24:08.992 ******** 2026-04-11 05:34:59.676505 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.676516 | orchestrator | 2026-04-11 05:34:59.676528 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-11 05:34:59.676539 | orchestrator | Saturday 11 April 2026 05:34:13 +0000 (0:00:01.182) 0:24:10.174 ******** 2026-04-11 05:34:59.676550 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.676561 | orchestrator | 2026-04-11 05:34:59.676572 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] 
************************* 2026-04-11 05:34:59.676606 | orchestrator | Saturday 11 April 2026 05:34:15 +0000 (0:00:01.155) 0:24:11.329 ******** 2026-04-11 05:34:59.676618 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.676628 | orchestrator | 2026-04-11 05:34:59.676654 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-11 05:34:59.676665 | orchestrator | Saturday 11 April 2026 05:34:16 +0000 (0:00:01.130) 0:24:12.460 ******** 2026-04-11 05:34:59.676676 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.676687 | orchestrator | 2026-04-11 05:34:59.676698 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-11 05:34:59.676708 | orchestrator | Saturday 11 April 2026 05:34:17 +0000 (0:00:01.103) 0:24:13.563 ******** 2026-04-11 05:34:59.676720 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.676731 | orchestrator | 2026-04-11 05:34:59.676742 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-11 05:34:59.676752 | orchestrator | Saturday 11 April 2026 05:34:18 +0000 (0:00:01.133) 0:24:14.697 ******** 2026-04-11 05:34:59.676763 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.676774 | orchestrator | 2026-04-11 05:34:59.676784 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-11 05:34:59.676795 | orchestrator | Saturday 11 April 2026 05:34:19 +0000 (0:00:01.111) 0:24:15.809 ******** 2026-04-11 05:34:59.676806 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:34:59.676817 | orchestrator | 2026-04-11 05:34:59.676828 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-11 05:34:59.676839 | orchestrator | Saturday 11 April 2026 05:34:21 +0000 (0:00:01.987) 0:24:17.796 ******** 2026-04-11 05:34:59.676850 | orchestrator | ok: [testbed-node-0] 2026-04-11 
05:34:59.676860 | orchestrator | 2026-04-11 05:34:59.676871 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-11 05:34:59.676882 | orchestrator | Saturday 11 April 2026 05:34:23 +0000 (0:00:02.376) 0:24:20.173 ******** 2026-04-11 05:34:59.676893 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-04-11 05:34:59.676909 | orchestrator | 2026-04-11 05:34:59.676928 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-11 05:34:59.676946 | orchestrator | Saturday 11 April 2026 05:34:25 +0000 (0:00:01.170) 0:24:21.343 ******** 2026-04-11 05:34:59.676966 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.677017 | orchestrator | 2026-04-11 05:34:59.677036 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-11 05:34:59.677054 | orchestrator | Saturday 11 April 2026 05:34:26 +0000 (0:00:01.134) 0:24:22.478 ******** 2026-04-11 05:34:59.677065 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.677076 | orchestrator | 2026-04-11 05:34:59.677087 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-11 05:34:59.677097 | orchestrator | Saturday 11 April 2026 05:34:27 +0000 (0:00:01.152) 0:24:23.630 ******** 2026-04-11 05:34:59.677108 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 05:34:59.677119 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 05:34:59.677129 | orchestrator | 2026-04-11 05:34:59.677140 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-11 05:34:59.677150 | orchestrator | Saturday 11 April 2026 05:34:29 +0000 (0:00:01.868) 0:24:25.499 ******** 2026-04-11 05:34:59.677161 | orchestrator | ok: 
[testbed-node-0] 2026-04-11 05:34:59.677172 | orchestrator | 2026-04-11 05:34:59.677182 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-11 05:34:59.677193 | orchestrator | Saturday 11 April 2026 05:34:30 +0000 (0:00:01.502) 0:24:27.002 ******** 2026-04-11 05:34:59.677204 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.677214 | orchestrator | 2026-04-11 05:34:59.677225 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-11 05:34:59.677235 | orchestrator | Saturday 11 April 2026 05:34:31 +0000 (0:00:01.179) 0:24:28.181 ******** 2026-04-11 05:34:59.677256 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.677267 | orchestrator | 2026-04-11 05:34:59.677277 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-11 05:34:59.677288 | orchestrator | Saturday 11 April 2026 05:34:33 +0000 (0:00:01.166) 0:24:29.348 ******** 2026-04-11 05:34:59.677299 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.677310 | orchestrator | 2026-04-11 05:34:59.677320 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-11 05:34:59.677331 | orchestrator | Saturday 11 April 2026 05:34:34 +0000 (0:00:01.111) 0:24:30.459 ******** 2026-04-11 05:34:59.677341 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-04-11 05:34:59.677352 | orchestrator | 2026-04-11 05:34:59.677362 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-11 05:34:59.677373 | orchestrator | Saturday 11 April 2026 05:34:35 +0000 (0:00:01.229) 0:24:31.688 ******** 2026-04-11 05:34:59.677384 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:34:59.677395 | orchestrator | 2026-04-11 05:34:59.677425 | orchestrator | TASK [ceph-container-common : Pulling 
alertmanager/prometheus/grafana container images] *** 2026-04-11 05:34:59.677437 | orchestrator | Saturday 11 April 2026 05:34:37 +0000 (0:00:01.703) 0:24:33.391 ******** 2026-04-11 05:34:59.677527 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 05:34:59.677539 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 05:34:59.677550 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 05:34:59.677560 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.677572 | orchestrator | 2026-04-11 05:34:59.677582 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-11 05:34:59.677593 | orchestrator | Saturday 11 April 2026 05:34:38 +0000 (0:00:01.223) 0:24:34.614 ******** 2026-04-11 05:34:59.677604 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.677614 | orchestrator | 2026-04-11 05:34:59.677625 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-11 05:34:59.677636 | orchestrator | Saturday 11 April 2026 05:34:39 +0000 (0:00:01.134) 0:24:35.749 ******** 2026-04-11 05:34:59.677654 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.677665 | orchestrator | 2026-04-11 05:34:59.677676 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-11 05:34:59.677686 | orchestrator | Saturday 11 April 2026 05:34:40 +0000 (0:00:01.174) 0:24:36.923 ******** 2026-04-11 05:34:59.677697 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.677708 | orchestrator | 2026-04-11 05:34:59.677718 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-11 05:34:59.677729 | orchestrator | Saturday 11 April 2026 05:34:41 +0000 (0:00:01.138) 0:24:38.062 ******** 2026-04-11 05:34:59.677740 | orchestrator | skipping: 
[testbed-node-0] 2026-04-11 05:34:59.677750 | orchestrator | 2026-04-11 05:34:59.677761 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-11 05:34:59.677772 | orchestrator | Saturday 11 April 2026 05:34:42 +0000 (0:00:01.138) 0:24:39.201 ******** 2026-04-11 05:34:59.677782 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.677793 | orchestrator | 2026-04-11 05:34:59.677804 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-11 05:34:59.677814 | orchestrator | Saturday 11 April 2026 05:34:44 +0000 (0:00:01.176) 0:24:40.378 ******** 2026-04-11 05:34:59.677825 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:34:59.677836 | orchestrator | 2026-04-11 05:34:59.677847 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-11 05:34:59.677858 | orchestrator | Saturday 11 April 2026 05:34:46 +0000 (0:00:02.555) 0:24:42.933 ******** 2026-04-11 05:34:59.677868 | orchestrator | ok: [testbed-node-0] 2026-04-11 05:34:59.677879 | orchestrator | 2026-04-11 05:34:59.677890 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-11 05:34:59.677908 | orchestrator | Saturday 11 April 2026 05:34:47 +0000 (0:00:01.127) 0:24:44.061 ******** 2026-04-11 05:34:59.677920 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-04-11 05:34:59.677931 | orchestrator | 2026-04-11 05:34:59.677949 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-11 05:34:59.677967 | orchestrator | Saturday 11 April 2026 05:34:48 +0000 (0:00:01.106) 0:24:45.167 ******** 2026-04-11 05:34:59.678011 | orchestrator | skipping: [testbed-node-0] 2026-04-11 05:34:59.678096 | orchestrator | 2026-04-11 05:34:59.678108 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] 
********************
2026-04-11 05:34:59.678118 | orchestrator | Saturday 11 April 2026 05:34:50 +0000 (0:00:01.159) 0:24:46.327 ********
2026-04-11 05:34:59.678129 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:34:59.678140 | orchestrator |
2026-04-11 05:34:59.678151 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-11 05:34:59.678162 | orchestrator | Saturday 11 April 2026 05:34:51 +0000 (0:00:01.140) 0:24:47.467 ********
2026-04-11 05:34:59.678173 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:34:59.678184 | orchestrator |
2026-04-11 05:34:59.678195 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-11 05:34:59.678205 | orchestrator | Saturday 11 April 2026 05:34:52 +0000 (0:00:01.177) 0:24:48.645 ********
2026-04-11 05:34:59.678216 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:34:59.678227 | orchestrator |
2026-04-11 05:34:59.678237 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-11 05:34:59.678248 | orchestrator | Saturday 11 April 2026 05:34:53 +0000 (0:00:01.194) 0:24:49.840 ********
2026-04-11 05:34:59.678259 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:34:59.678270 | orchestrator |
2026-04-11 05:34:59.678281 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-11 05:34:59.678292 | orchestrator | Saturday 11 April 2026 05:34:54 +0000 (0:00:01.120) 0:24:50.961 ********
2026-04-11 05:34:59.678303 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:34:59.678314 | orchestrator |
2026-04-11 05:34:59.678324 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-11 05:34:59.678335 | orchestrator | Saturday 11 April 2026 05:34:55 +0000 (0:00:01.150) 0:24:52.112 ********
2026-04-11 05:34:59.678346 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:34:59.678357 | orchestrator |
2026-04-11 05:34:59.678367 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-11 05:34:59.678378 | orchestrator | Saturday 11 April 2026 05:34:57 +0000 (0:00:01.253) 0:24:53.365 ********
2026-04-11 05:34:59.678389 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:34:59.678400 | orchestrator |
2026-04-11 05:34:59.678411 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-11 05:34:59.678422 | orchestrator | Saturday 11 April 2026 05:34:58 +0000 (0:00:01.180) 0:24:54.545 ********
2026-04-11 05:34:59.678432 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:34:59.678443 | orchestrator |
2026-04-11 05:34:59.678454 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-11 05:34:59.678465 | orchestrator | Saturday 11 April 2026 05:34:59 +0000 (0:00:01.182) 0:24:55.728 ********
2026-04-11 05:34:59.678487 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-04-11 05:35:43.457275 | orchestrator |
2026-04-11 05:35:43.457392 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-11 05:35:43.457409 | orchestrator | Saturday 11 April 2026 05:35:00 +0000 (0:00:01.118) 0:24:56.847 ********
2026-04-11 05:35:43.457422 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-04-11 05:35:43.457434 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-11 05:35:43.457445 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-11 05:35:43.457456 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-11 05:35:43.457489 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-11 05:35:43.457500 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-11 05:35:43.457511 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-11 05:35:43.457522 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-11 05:35:43.457533 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-11 05:35:43.457558 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-11 05:35:43.457569 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-11 05:35:43.457580 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-11 05:35:43.457590 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-11 05:35:43.457601 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-11 05:35:43.457612 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-04-11 05:35:43.457622 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-04-11 05:35:43.457633 | orchestrator |
2026-04-11 05:35:43.457644 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-11 05:35:43.457655 | orchestrator | Saturday 11 April 2026 05:35:07 +0000 (0:00:06.543) 0:25:03.391 ********
2026-04-11 05:35:43.457666 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.457676 | orchestrator |
2026-04-11 05:35:43.457687 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-11 05:35:43.457698 | orchestrator | Saturday 11 April 2026 05:35:08 +0000 (0:00:01.102) 0:25:04.493 ********
2026-04-11 05:35:43.457708 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.457719 | orchestrator |
2026-04-11 05:35:43.457730 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-11 05:35:43.457740 | orchestrator | Saturday 11 April 2026 05:35:09 +0000 (0:00:01.098) 0:25:05.591 ********
2026-04-11 05:35:43.457751 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.457762 | orchestrator |
2026-04-11 05:35:43.457772 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-11 05:35:43.457783 | orchestrator | Saturday 11 April 2026 05:35:10 +0000 (0:00:01.110) 0:25:06.702 ********
2026-04-11 05:35:43.457793 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.457805 | orchestrator |
2026-04-11 05:35:43.457818 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-11 05:35:43.457831 | orchestrator | Saturday 11 April 2026 05:35:11 +0000 (0:00:01.125) 0:25:07.828 ********
2026-04-11 05:35:43.457844 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.457856 | orchestrator |
2026-04-11 05:35:43.457869 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-11 05:35:43.457882 | orchestrator | Saturday 11 April 2026 05:35:12 +0000 (0:00:01.145) 0:25:08.974 ********
2026-04-11 05:35:43.457918 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.457931 | orchestrator |
2026-04-11 05:35:43.457944 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-11 05:35:43.457958 | orchestrator | Saturday 11 April 2026 05:35:13 +0000 (0:00:01.144) 0:25:10.119 ********
2026-04-11 05:35:43.457970 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.457982 | orchestrator |
2026-04-11 05:35:43.457995 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-11 05:35:43.458007 | orchestrator | Saturday 11 April 2026 05:35:15 +0000 (0:00:01.143) 0:25:11.262 ********
2026-04-11 05:35:43.458078 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458091 | orchestrator |
2026-04-11 05:35:43.458104 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-11 05:35:43.458116 | orchestrator | Saturday 11 April 2026 05:35:16 +0000 (0:00:01.115) 0:25:12.378 ********
2026-04-11 05:35:43.458129 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458151 | orchestrator |
2026-04-11 05:35:43.458164 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-11 05:35:43.458174 | orchestrator | Saturday 11 April 2026 05:35:17 +0000 (0:00:01.094) 0:25:13.472 ********
2026-04-11 05:35:43.458185 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458196 | orchestrator |
2026-04-11 05:35:43.458206 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-11 05:35:43.458217 | orchestrator | Saturday 11 April 2026 05:35:18 +0000 (0:00:01.152) 0:25:14.625 ********
2026-04-11 05:35:43.458227 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458238 | orchestrator |
2026-04-11 05:35:43.458249 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-11 05:35:43.458259 | orchestrator | Saturday 11 April 2026 05:35:19 +0000 (0:00:01.188) 0:25:15.814 ********
2026-04-11 05:35:43.458270 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458281 | orchestrator |
2026-04-11 05:35:43.458291 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-11 05:35:43.458302 | orchestrator | Saturday 11 April 2026 05:35:20 +0000 (0:00:01.118) 0:25:16.933 ********
2026-04-11 05:35:43.458313 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458323 | orchestrator |
2026-04-11 05:35:43.458334 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-11 05:35:43.458345 | orchestrator | Saturday 11 April 2026 05:35:21 +0000 (0:00:01.199) 0:25:18.133 ********
2026-04-11 05:35:43.458356 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458367 | orchestrator |
2026-04-11 05:35:43.458395 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-11 05:35:43.458406 | orchestrator | Saturday 11 April 2026 05:35:23 +0000 (0:00:01.152) 0:25:19.285 ********
2026-04-11 05:35:43.458417 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458428 | orchestrator |
2026-04-11 05:35:43.458439 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-11 05:35:43.458450 | orchestrator | Saturday 11 April 2026 05:35:24 +0000 (0:00:01.207) 0:25:20.493 ********
2026-04-11 05:35:43.458460 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458471 | orchestrator |
2026-04-11 05:35:43.458482 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-11 05:35:43.458493 | orchestrator | Saturday 11 April 2026 05:35:25 +0000 (0:00:01.143) 0:25:21.636 ********
2026-04-11 05:35:43.458503 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458514 | orchestrator |
2026-04-11 05:35:43.458525 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 05:35:43.458543 | orchestrator | Saturday 11 April 2026 05:35:26 +0000 (0:00:01.106) 0:25:22.743 ********
2026-04-11 05:35:43.458554 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458565 | orchestrator |
2026-04-11 05:35:43.458576 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 05:35:43.458587 | orchestrator | Saturday 11 April 2026 05:35:27 +0000 (0:00:01.124) 0:25:23.868 ********
2026-04-11 05:35:43.458598 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458608 | orchestrator |
2026-04-11 05:35:43.458619 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 05:35:43.458630 | orchestrator | Saturday 11 April 2026 05:35:28 +0000 (0:00:01.157) 0:25:25.026 ********
2026-04-11 05:35:43.458641 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458651 | orchestrator |
2026-04-11 05:35:43.458662 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 05:35:43.458673 | orchestrator | Saturday 11 April 2026 05:35:29 +0000 (0:00:01.142) 0:25:26.168 ********
2026-04-11 05:35:43.458684 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458694 | orchestrator |
2026-04-11 05:35:43.458705 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 05:35:43.458716 | orchestrator | Saturday 11 April 2026 05:35:31 +0000 (0:00:01.137) 0:25:27.305 ********
2026-04-11 05:35:43.458734 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3) 
2026-04-11 05:35:43.458745 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4) 
2026-04-11 05:35:43.458756 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5) 
2026-04-11 05:35:43.458767 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458778 | orchestrator |
2026-04-11 05:35:43.458788 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 05:35:43.458799 | orchestrator | Saturday 11 April 2026 05:35:32 +0000 (0:00:01.373) 0:25:28.678 ********
2026-04-11 05:35:43.458810 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3) 
2026-04-11 05:35:43.458821 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4) 
2026-04-11 05:35:43.458832 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5) 
2026-04-11 05:35:43.458843 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458854 | orchestrator |
2026-04-11 05:35:43.458864 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 05:35:43.458875 | orchestrator | Saturday 11 April 2026 05:35:33 +0000 (0:00:01.481) 0:25:30.160 ********
2026-04-11 05:35:43.458886 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3) 
2026-04-11 05:35:43.458927 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4) 
2026-04-11 05:35:43.458938 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5) 
2026-04-11 05:35:43.458949 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.458960 | orchestrator |
2026-04-11 05:35:43.458971 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 05:35:43.458981 | orchestrator | Saturday 11 April 2026 05:35:35 +0000 (0:00:01.401) 0:25:31.561 ********
2026-04-11 05:35:43.458992 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.459003 | orchestrator |
2026-04-11 05:35:43.459014 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 05:35:43.459025 | orchestrator | Saturday 11 April 2026 05:35:36 +0000 (0:00:01.095) 0:25:32.657 ********
2026-04-11 05:35:43.459035 | orchestrator | skipping: [testbed-node-0] => (item=0) 
2026-04-11 05:35:43.459046 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:35:43.459057 | orchestrator |
2026-04-11 05:35:43.459068 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-11 05:35:43.459079 | orchestrator | Saturday 11 April 2026 05:35:37 +0000 (0:00:01.219) 0:25:33.876 ********
2026-04-11 05:35:43.459090 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:35:43.459100 | orchestrator |
2026-04-11 05:35:43.459111 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-11 05:35:43.459122 | orchestrator | Saturday 11 April 2026 05:35:39 +0000 (0:00:02.208) 0:25:36.085 ********
2026-04-11 05:35:43.459133 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-11 05:35:43.459144 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:35:43.459156 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:35:43.459166 | orchestrator |
2026-04-11 05:35:43.459177 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-11 05:35:43.459188 | orchestrator | Saturday 11 April 2026 05:35:41 +0000 (0:00:01.672) 0:25:37.757 ********
2026-04-11 05:35:43.459198 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0
2026-04-11 05:35:43.459209 | orchestrator |
2026-04-11 05:35:43.459220 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-11 05:35:43.459231 | orchestrator | Saturday 11 April 2026 05:35:43 +0000 (0:00:01.511) 0:25:39.269 ********
2026-04-11 05:35:43.459248 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:36:44.765207 | orchestrator |
2026-04-11 05:36:44.765329 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-11 05:36:44.765347 | orchestrator | Saturday 11 April 2026 05:35:44 +0000 (0:00:01.529) 0:25:40.799 ********
2026-04-11 05:36:44.765384 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:36:44.765397 | orchestrator |
2026-04-11 05:36:44.765409 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-11 05:36:44.765420 | orchestrator | Saturday 11 April 2026 05:35:45 +0000 (0:00:01.123) 0:25:41.922 ********
2026-04-11 05:36:44.765431 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-11 05:36:44.765442 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-11 05:36:44.765453 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-11 05:36:44.765463 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-04-11 05:36:44.765474 | orchestrator |
2026-04-11 05:36:44.765485 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-11 05:36:44.765496 | orchestrator | Saturday 11 April 2026 05:35:52 +0000 (0:00:07.080) 0:25:49.003 ********
2026-04-11 05:36:44.765520 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:36:44.765533 | orchestrator |
2026-04-11 05:36:44.765544 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-11 05:36:44.765554 | orchestrator | Saturday 11 April 2026 05:35:53 +0000 (0:00:01.171) 0:25:50.174 ********
2026-04-11 05:36:44.765565 | orchestrator | skipping: [testbed-node-0] => (item=None) 
2026-04-11 05:36:44.765576 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-11 05:36:44.765587 | orchestrator |
2026-04-11 05:36:44.765598 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-11 05:36:44.765609 | orchestrator | Saturday 11 April 2026 05:35:57 +0000 (0:00:03.143) 0:25:53.317 ********
2026-04-11 05:36:44.765620 | orchestrator | skipping: [testbed-node-0] => (item=None) 
2026-04-11 05:36:44.765631 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-11 05:36:44.765641 | orchestrator |
2026-04-11 05:36:44.765652 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-11 05:36:44.765663 | orchestrator | Saturday 11 April 2026 05:35:59 +0000 (0:00:02.112) 0:25:55.429 ********
2026-04-11 05:36:44.765673 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:36:44.765684 | orchestrator |
2026-04-11 05:36:44.765695 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-11 05:36:44.765706 | orchestrator | Saturday 11 April 2026 05:36:00 +0000 (0:00:01.537) 0:25:56.967 ********
2026-04-11 05:36:44.765718 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:36:44.765731 | orchestrator |
2026-04-11 05:36:44.765743 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-11 05:36:44.765756 | orchestrator | Saturday 11 April 2026 05:36:01 +0000 (0:00:01.168) 0:25:58.135 ********
2026-04-11 05:36:44.765769 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:36:44.765781 | orchestrator |
2026-04-11 05:36:44.765821 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-11 05:36:44.765835 | orchestrator | Saturday 11 April 2026 05:36:03 +0000 (0:00:01.160) 0:25:59.295 ********
2026-04-11 05:36:44.765847 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0
2026-04-11 05:36:44.765861 | orchestrator |
2026-04-11 05:36:44.765873 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-11 05:36:44.765885 | orchestrator | Saturday 11 April 2026 05:36:04 +0000 (0:00:01.516) 0:26:00.811 ********
2026-04-11 05:36:44.765897 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:36:44.765910 | orchestrator |
2026-04-11 05:36:44.765922 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-11 05:36:44.765934 | orchestrator | Saturday 11 April 2026 05:36:05 +0000 (0:00:01.149) 0:26:01.961 ********
2026-04-11 05:36:44.765947 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:36:44.765959 | orchestrator |
2026-04-11 05:36:44.765971 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-11 05:36:44.765984 | orchestrator | Saturday 11 April 2026 05:36:06 +0000 (0:00:01.153) 0:26:03.114 ********
2026-04-11 05:36:44.765997 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0
2026-04-11 05:36:44.766068 | orchestrator |
2026-04-11 05:36:44.766082 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-11 05:36:44.766095 | orchestrator | Saturday 11 April 2026 05:36:08 +0000 (0:00:01.548) 0:26:04.663 ********
2026-04-11 05:36:44.766107 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:36:44.766127 | orchestrator |
2026-04-11 05:36:44.766147 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-11 05:36:44.766167 | orchestrator | Saturday 11 April 2026 05:36:10 +0000 (0:00:02.007) 0:26:06.671 ********
2026-04-11 05:36:44.766187 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:36:44.766207 | orchestrator |
2026-04-11 05:36:44.766218 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-11 05:36:44.766229 | orchestrator | Saturday 11 April 2026 05:36:12 +0000 (0:00:01.911) 0:26:08.582 ********
2026-04-11 05:36:44.766240 | orchestrator | ok: [testbed-node-0]
2026-04-11 05:36:44.766250 | orchestrator |
2026-04-11 05:36:44.766261 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-11 05:36:44.766272 | orchestrator | Saturday 11 April 2026 05:36:14 +0000 (0:00:02.334) 0:26:10.917 ********
2026-04-11 05:36:44.766282 | orchestrator | changed: [testbed-node-0]
2026-04-11 05:36:44.766293 | orchestrator |
2026-04-11 05:36:44.766304 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-11 05:36:44.766314 | orchestrator | Saturday 11 April 2026 05:36:18 +0000 (0:00:03.802) 0:26:14.720 ********
2026-04-11 05:36:44.766325 | orchestrator | skipping: [testbed-node-0]
2026-04-11 05:36:44.766336 | orchestrator |
2026-04-11 05:36:44.766346 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-04-11 05:36:44.766357 | orchestrator |
2026-04-11 05:36:44.766368 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-11 05:36:44.766379 | orchestrator | Saturday 11 April 2026 05:36:19 +0000 (0:00:01.025) 0:26:15.746 ********
2026-04-11 05:36:44.766389 | orchestrator | changed: [testbed-node-1]
2026-04-11 05:36:44.766400 | orchestrator |
2026-04-11 05:36:44.766430 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-04-11 05:36:44.766441 | orchestrator | Saturday 11 April 2026 05:36:22 +0000 (0:00:02.656) 0:26:18.402 ********
2026-04-11 05:36:44.766452 | orchestrator | changed: [testbed-node-1]
2026-04-11 05:36:44.766463 | orchestrator |
2026-04-11 05:36:44.766474 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-11 05:36:44.766485 | orchestrator | Saturday 11 April 2026 05:36:24 +0000 (0:00:02.134) 0:26:20.537 ********
2026-04-11 05:36:44.766495 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-04-11 05:36:44.766506 | orchestrator |
2026-04-11 05:36:44.766517 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-11 05:36:44.766528 | orchestrator | Saturday 11 April 2026 05:36:25 +0000 (0:00:01.194) 0:26:21.732 ********
2026-04-11 05:36:44.766539 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:36:44.766549 | orchestrator |
2026-04-11 05:36:44.766560 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-11 05:36:44.766578 | orchestrator | Saturday 11 April 2026 05:36:26 +0000 (0:00:01.460) 0:26:23.193 ********
2026-04-11 05:36:44.766589 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:36:44.766600 | orchestrator |
2026-04-11 05:36:44.766611 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-11 05:36:44.766622 | orchestrator | Saturday 11 April 2026 05:36:28 +0000 (0:00:01.210) 0:26:24.403 ********
2026-04-11 05:36:44.766632 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:36:44.766643 | orchestrator |
2026-04-11 05:36:44.766654 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-11 05:36:44.766664 | orchestrator | Saturday 11 April 2026 05:36:29 +0000 (0:00:01.489) 0:26:25.893 ********
2026-04-11 05:36:44.766675 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:36:44.766686 | orchestrator |
2026-04-11 05:36:44.766696 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-11 05:36:44.766715 | orchestrator | Saturday 11 April 2026 05:36:30 +0000 (0:00:01.177) 0:26:27.070 ********
2026-04-11 05:36:44.766726 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:36:44.766737 | orchestrator |
2026-04-11 05:36:44.766748 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-11 05:36:44.766758 | orchestrator | Saturday 11 April 2026 05:36:31 +0000 (0:00:01.138) 0:26:28.209 ********
2026-04-11 05:36:44.766769 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:36:44.766780 | orchestrator |
2026-04-11 05:36:44.766790 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-11 05:36:44.766828 | orchestrator | Saturday 11 April 2026 05:36:33 +0000 (0:00:01.213) 0:26:29.422 ********
2026-04-11 05:36:44.766838 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:36:44.766849 | orchestrator |
2026-04-11 05:36:44.766860 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-11 05:36:44.766870 | orchestrator | Saturday 11 April 2026 05:36:34 +0000 (0:00:01.160) 0:26:30.583 ********
2026-04-11 05:36:44.766881 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:36:44.766892 | orchestrator |
2026-04-11 05:36:44.766903 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-11 05:36:44.766913 | orchestrator | Saturday 11 April 2026 05:36:35 +0000 (0:00:01.167) 0:26:31.751 ********
2026-04-11 05:36:44.766924 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:36:44.766935 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-11 05:36:44.766946 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:36:44.766957 | orchestrator |
2026-04-11 05:36:44.766967 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-11 05:36:44.766978 | orchestrator | Saturday 11 April 2026 05:36:37 +0000 (0:00:02.076) 0:26:33.828 ********
2026-04-11 05:36:44.766989 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:36:44.766999 | orchestrator |
2026-04-11 05:36:44.767010 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-11 05:36:44.767021 | orchestrator | Saturday 11 April 2026 05:36:38 +0000 (0:00:01.366) 0:26:35.195 ********
2026-04-11 05:36:44.767031 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:36:44.767042 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-11 05:36:44.767053 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:36:44.767063 | orchestrator |
2026-04-11 05:36:44.767074 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-11 05:36:44.767085 | orchestrator | Saturday 11 April 2026 05:36:42 +0000 (0:00:03.777) 0:26:38.972 ********
2026-04-11 05:36:44.767096 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0) 
2026-04-11 05:36:44.767107 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1) 
2026-04-11 05:36:44.767118 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2) 
2026-04-11 05:36:44.767129 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:36:44.767139 | orchestrator |
2026-04-11 05:36:44.767154 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-11 05:36:44.767172 | orchestrator | Saturday 11 April 2026 05:36:44 +0000 (0:00:01.420) 0:26:40.393 ********
2026-04-11 05:36:44.767192 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 
2026-04-11 05:36:44.767215 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 
2026-04-11 05:36:44.767244 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 
2026-04-11 05:37:05.929730 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:37:05.929925 | orchestrator |
2026-04-11 05:37:05.929944 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-11 05:37:05.929958 | orchestrator | Saturday 11 April 2026 05:36:45 +0000 (0:00:01.645) 0:26:42.038 ********
2026-04-11 05:37:05.929988 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-04-11 05:37:05.930005 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-04-11 05:37:05.930076 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'}) 
2026-04-11 05:37:05.930090 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:37:05.930102 | orchestrator |
2026-04-11 05:37:05.930113 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-11 05:37:05.930125 | orchestrator | Saturday 11 April 2026 05:36:46 +0000 (0:00:01.152) 0:26:43.191 ********
2026-04-11 05:37:05.930140 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 05:36:39.498419', 'end': '2026-04-11 05:36:39.553973', 'delta': '0:00:00.055554', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 05:37:05.930157 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '26fb3b048944', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 05:36:40.046507', 'end': '2026-04-11 05:36:40.089001', 'delta': '0:00:00.042494', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26fb3b048944'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 05:37:05.930169 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '5c0324173fbf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 05:36:40.564602', 'end': '2026-04-11 05:36:41.612206', 'delta': '0:00:01.047604', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5c0324173fbf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 05:37:05.930203 | orchestrator |
2026-04-11 05:37:05.930216 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-11 05:37:05.930245 | orchestrator | Saturday 11 April 2026 05:36:48 +0000 (0:00:01.285) 0:26:44.477 ********
2026-04-11 05:37:05.930258 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:37:05.930273 | orchestrator |
2026-04-11 05:37:05.930293 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-11 05:37:05.930311 | orchestrator | Saturday 11 April 2026 05:36:49 +0000 (0:00:01.268) 0:26:45.746 ********
2026-04-11 05:37:05.930330 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:37:05.930349 | orchestrator |
2026-04-11 05:37:05.930367 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-11 05:37:05.930386 | orchestrator | Saturday 11 April 2026 05:36:50 +0000 (0:00:01.231) 0:26:46.977 ********
2026-04-11 05:37:05.930403 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:37:05.930421 | orchestrator |
2026-04-11 05:37:05.930440 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-11 05:37:05.930467 | orchestrator | Saturday 11 April 2026 05:36:51 +0000 (0:00:01.113) 0:26:48.091 ********
2026-04-11 05:37:05.930488 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-11 05:37:05.930507 | orchestrator |
2026-04-11 05:37:05.930526 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 05:37:05.930547 | orchestrator | Saturday 11 April 2026 05:36:53 +0000 (0:00:01.885) 0:26:49.977 ********
2026-04-11 05:37:05.930565 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:37:05.930585 | orchestrator |
2026-04-11 05:37:05.930604 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-11 05:37:05.930618 | orchestrator | Saturday 11 April 2026 05:36:54 +0000 (0:00:01.166) 0:26:51.143 ********
2026-04-11 05:37:05.930629 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:37:05.930640 | orchestrator |
2026-04-11 05:37:05.930651 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-11 05:37:05.930662 | orchestrator | Saturday 11 April 2026 05:36:56 +0000 (0:00:01.153) 0:26:52.297 ********
2026-04-11 05:37:05.930673 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:37:05.930683 | orchestrator |
2026-04-11 05:37:05.930694 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 05:37:05.930705 | orchestrator | Saturday 11 April 2026 05:36:57 +0000 (0:00:01.655) 0:26:53.952 ********
2026-04-11 05:37:05.930715 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:37:05.930726 | orchestrator |
2026-04-11 05:37:05.930737 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-11 05:37:05.930747 | orchestrator | Saturday 11 April 2026 05:36:58 +0000 (0:00:01.173) 0:26:55.126 ********
2026-04-11 05:37:05.930758 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:37:05.930792 | orchestrator |
2026-04-11 05:37:05.930803 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-11 05:37:05.930814 | orchestrator | Saturday 11 April 2026 05:37:00 +0000 (0:00:01.142) 0:26:56.268 ********
2026-04-11 05:37:05.930825 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:37:05.930836 | orchestrator |
2026-04-11 05:37:05.930846 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-11 05:37:05.930857 | orchestrator | Saturday 11 April 2026 05:37:01 +0000 (0:00:01.223) 0:26:57.492 ********
2026-04-11 05:37:05.930867 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:37:05.930878 | orchestrator |
2026-04-11 05:37:05.930889 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-11 05:37:05.930899 | orchestrator | Saturday 11 April 2026 05:37:02 +0000 (0:00:01.114) 0:26:58.607 ********
2026-04-11 05:37:05.930921 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:37:05.930932 | orchestrator |
2026-04-11 05:37:05.930943 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-11 05:37:05.930953 | orchestrator | Saturday 11 April 2026 05:37:03 +0000 (0:00:01.149) 0:26:59.757 ********
2026-04-11 05:37:05.930964 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:37:05.930975 | orchestrator |
2026-04-11 05:37:05.930986 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-11 05:37:05.930997 | orchestrator | Saturday 11 April 2026 05:37:04 +0000 (0:00:01.130) 0:27:00.887 ********
2026-04-11 05:37:05.931008 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:37:05.931018 | orchestrator |
2026-04-11 05:37:05.931029 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-11 05:37:05.931040 | orchestrator | Saturday 11 April 2026 05:37:05 +0000 (0:00:01.148) 0:27:02.035 ********
2026-04-11 05:37:05.931052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-04-11 05:37:05.931064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}) 
2026-04-11 05:37:05.931086 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:37:07.221807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 05:37:07.221919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:37:07.221939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:37:07.221960 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:37:07.222014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c2a3b65', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 05:37:07.222147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:37:07.222172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:37:07.222185 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:37:07.222199 | orchestrator | 2026-04-11 05:37:07.222211 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 05:37:07.222222 | orchestrator | Saturday 11 April 2026 05:37:07 +0000 (0:00:01.303) 0:27:03.339 ******** 2026-04-11 05:37:07.222235 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:37:07.222259 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:37:07.222273 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:37:07.222289 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:37:07.222313 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:37:23.069789 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:37:23.069914 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:37:23.069955 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0c2a3b65', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c2a3b65-0cab-4606-87f3-af05935d1899-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:37:23.069990 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:37:23.070010 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:37:23.070074 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:37:23.070088 | orchestrator | 2026-04-11 05:37:23.070100 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-11 05:37:23.070112 | orchestrator | Saturday 11 April 2026 05:37:08 +0000 (0:00:01.219) 0:27:04.559 ******** 2026-04-11 05:37:23.070123 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:37:23.070147 | orchestrator | 2026-04-11 05:37:23.070158 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-11 05:37:23.070169 | orchestrator | Saturday 11 April 2026 05:37:09 +0000 (0:00:01.490) 0:27:06.049 ******** 2026-04-11 05:37:23.070179 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:37:23.070190 | orchestrator | 2026-04-11 05:37:23.070201 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:37:23.070212 | orchestrator | Saturday 11 April 2026 05:37:10 +0000 (0:00:01.139) 0:27:07.189 ******** 2026-04-11 05:37:23.070222 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:37:23.070233 | orchestrator | 2026-04-11 05:37:23.070244 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:37:23.070255 | orchestrator | Saturday 11 April 2026 05:37:12 +0000 (0:00:01.496) 0:27:08.685 ******** 2026-04-11 05:37:23.070268 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:37:23.070281 | orchestrator | 2026-04-11 05:37:23.070294 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:37:23.070306 | orchestrator | Saturday 11 April 2026 05:37:13 
+0000 (0:00:01.189) 0:27:09.874 ******** 2026-04-11 05:37:23.070318 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:37:23.070331 | orchestrator | 2026-04-11 05:37:23.070343 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:37:23.070355 | orchestrator | Saturday 11 April 2026 05:37:14 +0000 (0:00:01.272) 0:27:11.147 ******** 2026-04-11 05:37:23.070368 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:37:23.070380 | orchestrator | 2026-04-11 05:37:23.070392 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 05:37:23.070405 | orchestrator | Saturday 11 April 2026 05:37:16 +0000 (0:00:01.142) 0:27:12.290 ******** 2026-04-11 05:37:23.070418 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-11 05:37:23.070431 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-11 05:37:23.070443 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-11 05:37:23.070455 | orchestrator | 2026-04-11 05:37:23.070468 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 05:37:23.070480 | orchestrator | Saturday 11 April 2026 05:37:17 +0000 (0:00:01.700) 0:27:13.991 ******** 2026-04-11 05:37:23.070493 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-11 05:37:23.070505 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-11 05:37:23.070518 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-11 05:37:23.070530 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:37:23.070543 | orchestrator | 2026-04-11 05:37:23.070556 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 05:37:23.070568 | orchestrator | Saturday 11 April 2026 05:37:19 +0000 (0:00:01.229) 0:27:15.220 ******** 2026-04-11 05:37:23.070581 | 
orchestrator | skipping: [testbed-node-1] 2026-04-11 05:37:23.070593 | orchestrator | 2026-04-11 05:37:23.070605 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 05:37:23.070618 | orchestrator | Saturday 11 April 2026 05:37:20 +0000 (0:00:01.148) 0:27:16.369 ******** 2026-04-11 05:37:23.070629 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:37:23.070640 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-11 05:37:23.070651 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:37:23.070662 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:37:23.070673 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:37:23.070684 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:37:23.070695 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:37:23.070730 | orchestrator | 2026-04-11 05:37:23.070906 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 05:37:23.070917 | orchestrator | Saturday 11 April 2026 05:37:22 +0000 (0:00:01.884) 0:27:18.253 ******** 2026-04-11 05:37:23.070928 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:37:23.070939 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-11 05:37:23.070950 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:37:23.070969 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:38:03.559364 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] 
=> (item=testbed-node-4) 2026-04-11 05:38:03.559479 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:38:03.559494 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:38:03.559506 | orchestrator | 2026-04-11 05:38:03.559519 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 05:38:03.559545 | orchestrator | Saturday 11 April 2026 05:37:24 +0000 (0:00:02.356) 0:27:20.610 ******** 2026-04-11 05:38:03.559557 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-04-11 05:38:03.559570 | orchestrator | 2026-04-11 05:38:03.559581 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 05:38:03.559592 | orchestrator | Saturday 11 April 2026 05:37:25 +0000 (0:00:01.126) 0:27:21.736 ******** 2026-04-11 05:38:03.559603 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-04-11 05:38:03.559614 | orchestrator | 2026-04-11 05:38:03.559625 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 05:38:03.559636 | orchestrator | Saturday 11 April 2026 05:37:26 +0000 (0:00:01.136) 0:27:22.873 ******** 2026-04-11 05:38:03.559646 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:38:03.559658 | orchestrator | 2026-04-11 05:38:03.559669 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 05:38:03.559680 | orchestrator | Saturday 11 April 2026 05:37:28 +0000 (0:00:01.656) 0:27:24.529 ******** 2026-04-11 05:38:03.559740 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:38:03.559753 | orchestrator | 2026-04-11 05:38:03.559764 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-04-11 05:38:03.559775 | orchestrator | Saturday 11 April 2026 05:37:29 +0000 (0:00:01.197) 0:27:25.726 ******** 2026-04-11 05:38:03.559786 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:38:03.559797 | orchestrator | 2026-04-11 05:38:03.559808 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 05:38:03.559819 | orchestrator | Saturday 11 April 2026 05:37:30 +0000 (0:00:01.116) 0:27:26.843 ******** 2026-04-11 05:38:03.559829 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:38:03.559840 | orchestrator | 2026-04-11 05:38:03.559851 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 05:38:03.559862 | orchestrator | Saturday 11 April 2026 05:37:31 +0000 (0:00:01.152) 0:27:27.996 ******** 2026-04-11 05:38:03.559873 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:38:03.559884 | orchestrator | 2026-04-11 05:38:03.559895 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 05:38:03.559906 | orchestrator | Saturday 11 April 2026 05:37:33 +0000 (0:00:01.577) 0:27:29.574 ******** 2026-04-11 05:38:03.559917 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:38:03.559928 | orchestrator | 2026-04-11 05:38:03.559938 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 05:38:03.559949 | orchestrator | Saturday 11 April 2026 05:37:34 +0000 (0:00:01.116) 0:27:30.691 ******** 2026-04-11 05:38:03.559960 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:38:03.559971 | orchestrator | 2026-04-11 05:38:03.559982 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 05:38:03.560015 | orchestrator | Saturday 11 April 2026 05:37:35 +0000 (0:00:01.139) 0:27:31.830 ******** 2026-04-11 05:38:03.560027 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:38:03.560037 | 
orchestrator | 2026-04-11 05:38:03.560048 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 05:38:03.560059 | orchestrator | Saturday 11 April 2026 05:37:37 +0000 (0:00:01.553) 0:27:33.384 ******** 2026-04-11 05:38:03.560070 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:38:03.560081 | orchestrator | 2026-04-11 05:38:03.560091 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 05:38:03.560102 | orchestrator | Saturday 11 April 2026 05:37:38 +0000 (0:00:01.595) 0:27:34.980 ******** 2026-04-11 05:38:03.560113 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:38:03.560123 | orchestrator | 2026-04-11 05:38:03.560135 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 05:38:03.560146 | orchestrator | Saturday 11 April 2026 05:37:39 +0000 (0:00:00.781) 0:27:35.761 ******** 2026-04-11 05:38:03.560156 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:38:03.560167 | orchestrator | 2026-04-11 05:38:03.560178 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 05:38:03.560188 | orchestrator | Saturday 11 April 2026 05:37:40 +0000 (0:00:00.769) 0:27:36.531 ******** 2026-04-11 05:38:03.560199 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:38:03.560210 | orchestrator | 2026-04-11 05:38:03.560220 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 05:38:03.560231 | orchestrator | Saturday 11 April 2026 05:37:41 +0000 (0:00:00.760) 0:27:37.291 ******** 2026-04-11 05:38:03.560242 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:38:03.560252 | orchestrator | 2026-04-11 05:38:03.560263 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 05:38:03.560274 | orchestrator | Saturday 11 April 2026 05:37:41 +0000 
(0:00:00.778) 0:27:38.070 ******** 2026-04-11 05:38:03.560284 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:38:03.560295 | orchestrator | 2026-04-11 05:38:03.560306 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 05:38:03.560316 | orchestrator | Saturday 11 April 2026 05:37:42 +0000 (0:00:00.847) 0:27:38.917 ******** 2026-04-11 05:38:03.560327 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:38:03.560338 | orchestrator | 2026-04-11 05:38:03.560348 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 05:38:03.560363 | orchestrator | Saturday 11 April 2026 05:37:43 +0000 (0:00:00.780) 0:27:39.698 ******** 2026-04-11 05:38:03.560381 | orchestrator | skipping: [testbed-node-1] 2026-04-11 05:38:03.560400 | orchestrator | 2026-04-11 05:38:03.560418 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 05:38:03.560458 | orchestrator | Saturday 11 April 2026 05:37:44 +0000 (0:00:00.803) 0:27:40.502 ******** 2026-04-11 05:38:03.560477 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:38:03.560495 | orchestrator | 2026-04-11 05:38:03.560512 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 05:38:03.560531 | orchestrator | Saturday 11 April 2026 05:37:45 +0000 (0:00:00.800) 0:27:41.302 ******** 2026-04-11 05:38:03.560550 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:38:03.560568 | orchestrator | 2026-04-11 05:38:03.560588 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 05:38:03.560614 | orchestrator | Saturday 11 April 2026 05:37:45 +0000 (0:00:00.810) 0:27:42.113 ******** 2026-04-11 05:38:03.560626 | orchestrator | ok: [testbed-node-1] 2026-04-11 05:38:03.560636 | orchestrator | 2026-04-11 05:38:03.560647 | orchestrator | TASK [ceph-common : Include 
configure_repository.yml] **************************
2026-04-11 05:38:03.560658 | orchestrator | Saturday 11 April 2026 05:37:46 +0000 (0:00:00.832) 0:27:42.946 ********
2026-04-11 05:38:03.560669 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.560679 | orchestrator |
2026-04-11 05:38:03.560722 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-11 05:38:03.560749 | orchestrator | Saturday 11 April 2026 05:37:47 +0000 (0:00:00.786) 0:27:43.732 ********
2026-04-11 05:38:03.560760 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.560771 | orchestrator |
2026-04-11 05:38:03.560782 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-11 05:38:03.560793 | orchestrator | Saturday 11 April 2026 05:37:48 +0000 (0:00:00.839) 0:27:44.572 ********
2026-04-11 05:38:03.560804 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.560814 | orchestrator |
2026-04-11 05:38:03.560825 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-11 05:38:03.560836 | orchestrator | Saturday 11 April 2026 05:37:49 +0000 (0:00:00.778) 0:27:45.351 ********
2026-04-11 05:38:03.560847 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.560857 | orchestrator |
2026-04-11 05:38:03.560868 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-11 05:38:03.560879 | orchestrator | Saturday 11 April 2026 05:37:49 +0000 (0:00:00.776) 0:27:46.128 ********
2026-04-11 05:38:03.560890 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.560901 | orchestrator |
2026-04-11 05:38:03.560911 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-11 05:38:03.560922 | orchestrator | Saturday 11 April 2026 05:37:50 +0000 (0:00:00.823) 0:27:46.951 ********
2026-04-11 05:38:03.560933 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.560943 | orchestrator |
2026-04-11 05:38:03.560954 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-11 05:38:03.560968 | orchestrator | Saturday 11 April 2026 05:37:51 +0000 (0:00:00.805) 0:27:47.757 ********
2026-04-11 05:38:03.560987 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.561005 | orchestrator |
2026-04-11 05:38:03.561023 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-11 05:38:03.561041 | orchestrator | Saturday 11 April 2026 05:37:52 +0000 (0:00:00.799) 0:27:48.556 ********
2026-04-11 05:38:03.561060 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.561080 | orchestrator |
2026-04-11 05:38:03.561094 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-11 05:38:03.561105 | orchestrator | Saturday 11 April 2026 05:37:53 +0000 (0:00:00.849) 0:27:49.406 ********
2026-04-11 05:38:03.561116 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.561127 | orchestrator |
2026-04-11 05:38:03.561138 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-11 05:38:03.561149 | orchestrator | Saturday 11 April 2026 05:37:53 +0000 (0:00:00.789) 0:27:50.196 ********
2026-04-11 05:38:03.561160 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.561171 | orchestrator |
2026-04-11 05:38:03.561181 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-11 05:38:03.561192 | orchestrator | Saturday 11 April 2026 05:37:54 +0000 (0:00:00.802) 0:27:50.999 ********
2026-04-11 05:38:03.561203 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.561214 | orchestrator |
2026-04-11 05:38:03.561224 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-11 05:38:03.561235 | orchestrator | Saturday 11 April 2026 05:37:55 +0000 (0:00:00.792) 0:27:51.792 ********
2026-04-11 05:38:03.561246 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.561257 | orchestrator |
2026-04-11 05:38:03.561267 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-11 05:38:03.561278 | orchestrator | Saturday 11 April 2026 05:37:56 +0000 (0:00:00.781) 0:27:52.574 ********
2026-04-11 05:38:03.561289 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:38:03.561300 | orchestrator |
2026-04-11 05:38:03.561311 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-11 05:38:03.561322 | orchestrator | Saturday 11 April 2026 05:37:58 +0000 (0:00:01.666) 0:27:54.240 ********
2026-04-11 05:38:03.561333 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:38:03.561343 | orchestrator |
2026-04-11 05:38:03.561354 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-11 05:38:03.561386 | orchestrator | Saturday 11 April 2026 05:38:00 +0000 (0:00:02.119) 0:27:56.359 ********
2026-04-11 05:38:03.561397 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-04-11 05:38:03.561408 | orchestrator |
2026-04-11 05:38:03.561419 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-11 05:38:03.561430 | orchestrator | Saturday 11 April 2026 05:38:01 +0000 (0:00:01.108) 0:27:57.468 ********
2026-04-11 05:38:03.561441 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.561451 | orchestrator |
2026-04-11 05:38:03.561462 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-11 05:38:03.561473 | orchestrator | Saturday 11 April 2026 05:38:02 +0000 (0:00:01.125) 0:27:58.593 ********
2026-04-11 05:38:03.561484 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:03.561494 | orchestrator |
2026-04-11 05:38:03.561505 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-11 05:38:03.561527 | orchestrator | Saturday 11 April 2026 05:38:03 +0000 (0:00:01.170) 0:27:59.763 ********
2026-04-11 05:38:45.799737 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-11 05:38:45.799857 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-11 05:38:45.799874 | orchestrator |
2026-04-11 05:38:45.799887 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-11 05:38:45.799899 | orchestrator | Saturday 11 April 2026 05:38:05 +0000 (0:00:01.795) 0:28:01.559 ********
2026-04-11 05:38:45.799910 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:38:45.799922 | orchestrator |
2026-04-11 05:38:45.799933 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-11 05:38:45.799944 | orchestrator | Saturday 11 April 2026 05:38:06 +0000 (0:00:01.559) 0:28:03.119 ********
2026-04-11 05:38:45.799955 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.799969 | orchestrator |
2026-04-11 05:38:45.799980 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-11 05:38:45.799991 | orchestrator | Saturday 11 April 2026 05:38:08 +0000 (0:00:01.137) 0:28:04.257 ********
2026-04-11 05:38:45.800002 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.800013 | orchestrator |
2026-04-11 05:38:45.800024 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-11 05:38:45.800035 | orchestrator | Saturday 11 April 2026 05:38:08 +0000 (0:00:00.749) 0:28:05.007 ********
2026-04-11 05:38:45.800045 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.800056 | orchestrator |
2026-04-11 05:38:45.800067 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-11 05:38:45.800078 | orchestrator | Saturday 11 April 2026 05:38:09 +0000 (0:00:00.767) 0:28:05.774 ********
2026-04-11 05:38:45.800088 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-04-11 05:38:45.800100 | orchestrator |
2026-04-11 05:38:45.800111 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-11 05:38:45.800121 | orchestrator | Saturday 11 April 2026 05:38:10 +0000 (0:00:01.211) 0:28:06.986 ********
2026-04-11 05:38:45.800132 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:38:45.800143 | orchestrator |
2026-04-11 05:38:45.800154 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-11 05:38:45.800165 | orchestrator | Saturday 11 April 2026 05:38:12 +0000 (0:00:01.705) 0:28:08.691 ********
2026-04-11 05:38:45.800176 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-11 05:38:45.800186 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-11 05:38:45.800197 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-11 05:38:45.800311 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.800333 | orchestrator |
2026-04-11 05:38:45.800345 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-11 05:38:45.800378 | orchestrator | Saturday 11 April 2026 05:38:13 +0000 (0:00:01.152) 0:28:09.844 ********
2026-04-11 05:38:45.800391 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.800404 | orchestrator |
2026-04-11 05:38:45.800416 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-11 05:38:45.800429 | orchestrator | Saturday 11 April 2026 05:38:14 +0000 (0:00:01.154) 0:28:10.999 ********
2026-04-11 05:38:45.800441 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.800453 | orchestrator |
2026-04-11 05:38:45.800465 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-11 05:38:45.800477 | orchestrator | Saturday 11 April 2026 05:38:15 +0000 (0:00:01.167) 0:28:12.167 ********
2026-04-11 05:38:45.800490 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.800502 | orchestrator |
2026-04-11 05:38:45.800514 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-11 05:38:45.800526 | orchestrator | Saturday 11 April 2026 05:38:17 +0000 (0:00:01.149) 0:28:13.317 ********
2026-04-11 05:38:45.800539 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.800551 | orchestrator |
2026-04-11 05:38:45.800563 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-11 05:38:45.800576 | orchestrator | Saturday 11 April 2026 05:38:18 +0000 (0:00:01.130) 0:28:14.447 ********
2026-04-11 05:38:45.800589 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.800600 | orchestrator |
2026-04-11 05:38:45.800611 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-11 05:38:45.800622 | orchestrator | Saturday 11 April 2026 05:38:19 +0000 (0:00:00.771) 0:28:15.218 ********
2026-04-11 05:38:45.800652 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:38:45.800664 | orchestrator |
2026-04-11 05:38:45.800674 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-11 05:38:45.800685 | orchestrator | Saturday 11 April 2026 05:38:21 +0000 (0:00:02.224) 0:28:17.442 ********
2026-04-11 05:38:45.800696 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:38:45.800707 | orchestrator |
2026-04-11 05:38:45.800717 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-11 05:38:45.800728 | orchestrator | Saturday 11 April 2026 05:38:22 +0000 (0:00:00.786) 0:28:18.229 ********
2026-04-11 05:38:45.800739 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-04-11 05:38:45.800749 | orchestrator |
2026-04-11 05:38:45.800760 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-11 05:38:45.800771 | orchestrator | Saturday 11 April 2026 05:38:23 +0000 (0:00:01.115) 0:28:19.344 ********
2026-04-11 05:38:45.800781 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.800792 | orchestrator |
2026-04-11 05:38:45.800803 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-11 05:38:45.800813 | orchestrator | Saturday 11 April 2026 05:38:24 +0000 (0:00:01.107) 0:28:20.452 ********
2026-04-11 05:38:45.800824 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.800835 | orchestrator |
2026-04-11 05:38:45.800845 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-11 05:38:45.800875 | orchestrator | Saturday 11 April 2026 05:38:25 +0000 (0:00:01.167) 0:28:21.620 ********
2026-04-11 05:38:45.800886 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.800897 | orchestrator |
2026-04-11 05:38:45.800908 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-11 05:38:45.800918 | orchestrator | Saturday 11 April 2026 05:38:26 +0000 (0:00:01.171) 0:28:22.791 ********
2026-04-11 05:38:45.800929 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.800939 | orchestrator |
2026-04-11 05:38:45.800950 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-11 05:38:45.800966 | orchestrator | Saturday 11 April 2026 05:38:27 +0000 (0:00:01.175) 0:28:23.967 ********
2026-04-11 05:38:45.800977 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.800996 | orchestrator |
2026-04-11 05:38:45.801007 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-11 05:38:45.801018 | orchestrator | Saturday 11 April 2026 05:38:28 +0000 (0:00:01.127) 0:28:25.095 ********
2026-04-11 05:38:45.801029 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.801039 | orchestrator |
2026-04-11 05:38:45.801050 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-11 05:38:45.801060 | orchestrator | Saturday 11 April 2026 05:38:30 +0000 (0:00:01.152) 0:28:26.248 ********
2026-04-11 05:38:45.801071 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.801081 | orchestrator |
2026-04-11 05:38:45.801092 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-11 05:38:45.801103 | orchestrator | Saturday 11 April 2026 05:38:31 +0000 (0:00:01.130) 0:28:27.378 ********
2026-04-11 05:38:45.801114 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.801124 | orchestrator |
2026-04-11 05:38:45.801135 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-11 05:38:45.801145 | orchestrator | Saturday 11 April 2026 05:38:32 +0000 (0:00:01.126) 0:28:28.505 ********
2026-04-11 05:38:45.801156 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:38:45.801167 | orchestrator |
2026-04-11 05:38:45.801177 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-11 05:38:45.801188 | orchestrator | Saturday 11 April 2026 05:38:33 +0000 (0:00:00.913) 0:28:29.419 ********
2026-04-11 05:38:45.801199 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-04-11 05:38:45.801210 | orchestrator |
2026-04-11 05:38:45.801220 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-11 05:38:45.801231 | orchestrator | Saturday 11 April 2026 05:38:34 +0000 (0:00:01.135) 0:28:30.554 ********
2026-04-11 05:38:45.801242 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-04-11 05:38:45.801253 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-11 05:38:45.801264 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-11 05:38:45.801274 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-11 05:38:45.801285 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-11 05:38:45.801295 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-11 05:38:45.801306 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-11 05:38:45.801317 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-11 05:38:45.801327 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-11 05:38:45.801338 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-11 05:38:45.801349 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-11 05:38:45.801360 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-11 05:38:45.801370 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-11 05:38:45.801381 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-11 05:38:45.801392 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-04-11 05:38:45.801402 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-04-11 05:38:45.801413 | orchestrator |
2026-04-11 05:38:45.801423 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-11 05:38:45.801434 | orchestrator | Saturday 11 April 2026 05:38:40 +0000 (0:00:06.609) 0:28:37.164 ********
2026-04-11 05:38:45.801444 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.801455 | orchestrator |
2026-04-11 05:38:45.801466 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-11 05:38:45.801476 | orchestrator | Saturday 11 April 2026 05:38:41 +0000 (0:00:00.792) 0:28:37.957 ********
2026-04-11 05:38:45.801487 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.801498 | orchestrator |
2026-04-11 05:38:45.801508 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-11 05:38:45.801525 | orchestrator | Saturday 11 April 2026 05:38:42 +0000 (0:00:00.801) 0:28:38.759 ********
2026-04-11 05:38:45.801536 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.801546 | orchestrator |
2026-04-11 05:38:45.801557 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-11 05:38:45.801567 | orchestrator | Saturday 11 April 2026 05:38:43 +0000 (0:00:00.814) 0:28:39.574 ********
2026-04-11 05:38:45.801578 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.801589 | orchestrator |
2026-04-11 05:38:45.801599 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-11 05:38:45.801610 | orchestrator | Saturday 11 April 2026 05:38:44 +0000 (0:00:00.797) 0:28:40.372 ********
2026-04-11 05:38:45.801621 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.801653 | orchestrator |
2026-04-11 05:38:45.801664 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-11 05:38:45.801674 | orchestrator | Saturday 11 April 2026 05:38:44 +0000 (0:00:00.777) 0:28:41.149 ********
2026-04-11 05:38:45.801685 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:38:45.801696 | orchestrator |
2026-04-11 05:38:45.801706 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-11 05:38:45.801718 | orchestrator | Saturday 11 April 2026 05:38:45 +0000 (0:00:00.798) 0:28:41.947 ********
2026-04-11 05:38:45.801735 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.417997 | orchestrator |
2026-04-11 05:39:30.418168 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-11 05:39:30.418182 | orchestrator | Saturday 11 April 2026 05:38:46 +0000 (0:00:00.808) 0:28:42.756 ********
2026-04-11 05:39:30.418192 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418202 | orchestrator |
2026-04-11 05:39:30.418210 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-11 05:39:30.418232 | orchestrator | Saturday 11 April 2026 05:38:47 +0000 (0:00:00.773) 0:28:43.530 ********
2026-04-11 05:39:30.418241 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418249 | orchestrator |
2026-04-11 05:39:30.418257 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-11 05:39:30.418265 | orchestrator | Saturday 11 April 2026 05:38:48 +0000 (0:00:00.827) 0:28:44.357 ********
2026-04-11 05:39:30.418272 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418280 | orchestrator |
2026-04-11 05:39:30.418288 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-11 05:39:30.418296 | orchestrator | Saturday 11 April 2026 05:38:48 +0000 (0:00:00.795) 0:28:45.152 ********
2026-04-11 05:39:30.418304 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418311 | orchestrator |
2026-04-11 05:39:30.418319 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-11 05:39:30.418327 | orchestrator | Saturday 11 April 2026 05:38:49 +0000 (0:00:00.840) 0:28:45.993 ********
2026-04-11 05:39:30.418335 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418343 | orchestrator |
2026-04-11 05:39:30.418350 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-11 05:39:30.418358 | orchestrator | Saturday 11 April 2026 05:38:50 +0000 (0:00:00.825) 0:28:46.819 ********
2026-04-11 05:39:30.418366 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418374 | orchestrator |
2026-04-11 05:39:30.418381 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-11 05:39:30.418390 | orchestrator | Saturday 11 April 2026 05:38:51 +0000 (0:00:00.869) 0:28:47.689 ********
2026-04-11 05:39:30.418398 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418406 | orchestrator |
2026-04-11 05:39:30.418414 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-11 05:39:30.418421 | orchestrator | Saturday 11 April 2026 05:38:52 +0000 (0:00:00.797) 0:28:48.486 ********
2026-04-11 05:39:30.418429 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418453 | orchestrator |
2026-04-11 05:39:30.418462 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-11 05:39:30.418469 | orchestrator | Saturday 11 April 2026 05:38:53 +0000 (0:00:00.875) 0:28:49.362 ********
2026-04-11 05:39:30.418477 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418485 | orchestrator |
2026-04-11 05:39:30.418493 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-11 05:39:30.418501 | orchestrator | Saturday 11 April 2026 05:38:53 +0000 (0:00:00.771) 0:28:50.134 ********
2026-04-11 05:39:30.418508 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418516 | orchestrator |
2026-04-11 05:39:30.418524 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 05:39:30.418532 | orchestrator | Saturday 11 April 2026 05:38:54 +0000 (0:00:00.786) 0:28:50.920 ********
2026-04-11 05:39:30.418542 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418556 | orchestrator |
2026-04-11 05:39:30.418569 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 05:39:30.418603 | orchestrator | Saturday 11 April 2026 05:38:55 +0000 (0:00:00.835) 0:28:51.756 ********
2026-04-11 05:39:30.418617 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418630 | orchestrator |
2026-04-11 05:39:30.418644 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 05:39:30.418657 | orchestrator | Saturday 11 April 2026 05:38:56 +0000 (0:00:00.777) 0:28:52.533 ********
2026-04-11 05:39:30.418671 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418684 | orchestrator |
2026-04-11 05:39:30.418696 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 05:39:30.418710 | orchestrator | Saturday 11 April 2026 05:38:57 +0000 (0:00:00.798) 0:28:53.332 ********
2026-04-11 05:39:30.418722 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418736 | orchestrator |
2026-04-11 05:39:30.418750 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 05:39:30.418764 | orchestrator | Saturday 11 April 2026 05:38:57 +0000 (0:00:00.821) 0:28:54.154 ********
2026-04-11 05:39:30.418777 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-11 05:39:30.418790 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-11 05:39:30.418804 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-11 05:39:30.418818 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418832 | orchestrator |
2026-04-11 05:39:30.418845 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 05:39:30.418859 | orchestrator | Saturday 11 April 2026 05:38:59 +0000 (0:00:01.452) 0:28:55.606 ********
2026-04-11 05:39:30.418873 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-11 05:39:30.418887 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-11 05:39:30.418900 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-11 05:39:30.418915 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.418924 | orchestrator |
2026-04-11 05:39:30.418932 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 05:39:30.418940 | orchestrator | Saturday 11 April 2026 05:39:00 +0000 (0:00:01.435) 0:28:57.042 ********
2026-04-11 05:39:30.418947 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-11 05:39:30.418955 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-11 05:39:30.418963 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-11 05:39:30.418987 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.419000 | orchestrator |
2026-04-11 05:39:30.419013 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 05:39:30.419027 | orchestrator | Saturday 11 April 2026 05:39:02 +0000 (0:00:01.422) 0:28:58.465 ********
2026-04-11 05:39:30.419041 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.419067 | orchestrator |
2026-04-11 05:39:30.419081 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 05:39:30.419101 | orchestrator | Saturday 11 April 2026 05:39:03 +0000 (0:00:00.773) 0:28:59.238 ********
2026-04-11 05:39:30.419117 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-11 05:39:30.419131 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.419145 | orchestrator |
2026-04-11 05:39:30.419158 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-11 05:39:30.419173 | orchestrator | Saturday 11 April 2026 05:39:03 +0000 (0:00:00.867) 0:29:00.106 ********
2026-04-11 05:39:30.419186 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:39:30.419201 | orchestrator |
2026-04-11 05:39:30.419209 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-11 05:39:30.419217 | orchestrator | Saturday 11 April 2026 05:39:05 +0000 (0:00:01.442) 0:29:01.549 ********
2026-04-11 05:39:30.419224 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:39:30.419233 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-11 05:39:30.419241 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:39:30.419249 | orchestrator |
2026-04-11 05:39:30.419256 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-11 05:39:30.419264 | orchestrator | Saturday 11 April 2026 05:39:06 +0000 (0:00:01.369) 0:29:02.919 ********
2026-04-11 05:39:30.419272 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1
2026-04-11 05:39:30.419285 | orchestrator |
2026-04-11 05:39:30.419299 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-11 05:39:30.419312 | orchestrator | Saturday 11 April 2026 05:39:07 +0000 (0:00:01.203) 0:29:04.123 ********
2026-04-11 05:39:30.419325 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:39:30.419340 | orchestrator |
2026-04-11 05:39:30.419353 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-11 05:39:30.419366 | orchestrator | Saturday 11 April 2026 05:39:09 +0000 (0:00:01.545) 0:29:05.668 ********
2026-04-11 05:39:30.419379 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.419393 | orchestrator |
2026-04-11 05:39:30.419407 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-11 05:39:30.419420 | orchestrator | Saturday 11 April 2026 05:39:10 +0000 (0:00:01.163) 0:29:06.832 ********
2026-04-11 05:39:30.419434 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 05:39:30.419446 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 05:39:30.419459 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 05:39:30.419467 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}]
2026-04-11 05:39:30.419475 | orchestrator |
2026-04-11 05:39:30.419483 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-11 05:39:30.419490 | orchestrator | Saturday 11 April 2026 05:39:18 +0000 (0:00:07.488) 0:29:14.320 ********
2026-04-11 05:39:30.419498 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:39:30.419506 | orchestrator |
2026-04-11 05:39:30.419514 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-11 05:39:30.419521 | orchestrator | Saturday 11 April 2026 05:39:19 +0000 (0:00:01.167) 0:29:15.488 ********
2026-04-11 05:39:30.419529 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-11 05:39:30.419537 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-11 05:39:30.419544 | orchestrator |
2026-04-11 05:39:30.419552 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-11 05:39:30.419560 | orchestrator | Saturday 11 April 2026 05:39:22 +0000 (0:00:03.574) 0:29:19.062 ********
2026-04-11 05:39:30.419567 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-11 05:39:30.419577 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-11 05:39:30.419643 | orchestrator |
2026-04-11 05:39:30.419658 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-11 05:39:30.419672 | orchestrator | Saturday 11 April 2026 05:39:24 +0000 (0:00:02.000) 0:29:21.062 ********
2026-04-11 05:39:30.419684 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:39:30.419697 | orchestrator |
2026-04-11 05:39:30.419711 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-11 05:39:30.419725 | orchestrator | Saturday 11 April 2026 05:39:26 +0000 (0:00:01.591) 0:29:22.654 ********
2026-04-11 05:39:30.419738 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.419753 | orchestrator |
2026-04-11 05:39:30.419765 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-11 05:39:30.419780 | orchestrator | Saturday 11 April 2026 05:39:27 +0000 (0:00:00.798) 0:29:23.453 ********
2026-04-11 05:39:30.419793 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.419807 | orchestrator |
2026-04-11 05:39:30.419820 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-11 05:39:30.419833 | orchestrator | Saturday 11 April 2026 05:39:28 +0000 (0:00:00.823) 0:29:24.276 ********
2026-04-11 05:39:30.419845 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1
2026-04-11 05:39:30.419857 | orchestrator |
2026-04-11 05:39:30.419870 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-11 05:39:30.419883 | orchestrator | Saturday 11 April 2026 05:39:29 +0000 (0:00:01.137) 0:29:25.413 ********
2026-04-11 05:39:30.419897 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:39:30.419910 | orchestrator |
2026-04-11 05:39:30.419923 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-11 05:39:30.419946 | orchestrator | Saturday 11 April 2026 05:39:30 +0000 (0:00:01.205) 0:29:26.619 ********
2026-04-11 05:40:10.170656 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:40:10.170780 | orchestrator |
2026-04-11 05:40:10.170798 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-11 05:40:10.170811 | orchestrator | Saturday 11 April 2026 05:39:31 +0000 (0:00:01.177) 0:29:27.797 ********
2026-04-11 05:40:10.170822 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1
2026-04-11 05:40:10.170833 | orchestrator |
2026-04-11 05:40:10.170858 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-11 05:40:10.170870 | orchestrator | Saturday 11 April 2026 05:39:32 +0000 (0:00:01.168) 0:29:28.965 ********
2026-04-11 05:40:10.170881 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:40:10.170892 | orchestrator |
2026-04-11 05:40:10.170904 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-11 05:40:10.170915 | orchestrator | Saturday 11 April 2026 05:39:34 +0000 (0:00:02.101) 0:29:31.067 ********
2026-04-11 05:40:10.170925 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:40:10.170936 | orchestrator |
2026-04-11 05:40:10.170947 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-11 05:40:10.170958 | orchestrator | Saturday 11 April 2026 05:39:36 +0000 (0:00:02.005) 0:29:33.072 ********
2026-04-11 05:40:10.170969 | orchestrator | ok: [testbed-node-1]
2026-04-11 05:40:10.170979 | orchestrator |
2026-04-11 05:40:10.170990 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-11 05:40:10.171001 | orchestrator | Saturday 11 April 2026 05:39:39 +0000 (0:00:02.576) 0:29:35.648 ********
2026-04-11 05:40:10.171012 | orchestrator | changed: [testbed-node-1]
2026-04-11 05:40:10.171023 | orchestrator |
2026-04-11 05:40:10.171034 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-11 05:40:10.171048 | orchestrator | Saturday 11 April 2026 05:39:42 +0000 (0:00:03.524) 0:29:39.173 ********
2026-04-11 05:40:10.171066 | orchestrator | skipping: [testbed-node-1]
2026-04-11 05:40:10.171086 | orchestrator |
2026-04-11 05:40:10.171106 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-04-11 05:40:10.171126 | orchestrator |
2026-04-11 05:40:10.171145 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-11 05:40:10.171182 | orchestrator | Saturday 11 April 2026 05:39:44 +0000 (0:00:01.062) 0:29:40.236 ********
2026-04-11 05:40:10.171196 | orchestrator | changed: [testbed-node-2]
2026-04-11 05:40:10.171208 | orchestrator |
2026-04-11 05:40:10.171221 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-04-11 05:40:10.171233 | orchestrator | Saturday 11 April 2026 05:39:46 +0000 (0:00:02.589) 0:29:42.826 ********
2026-04-11 05:40:10.171246 | orchestrator | changed: [testbed-node-2]
2026-04-11 05:40:10.171258 | orchestrator |
2026-04-11 05:40:10.171270 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-11 05:40:10.171281 | orchestrator | Saturday 11 April 2026 05:39:48 +0000 (0:00:02.171) 0:29:44.997 ********
2026-04-11 05:40:10.171291 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-04-11 05:40:10.171302 | orchestrator |
2026-04-11 05:40:10.171313 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-11 05:40:10.171324 | orchestrator | Saturday 11 April 2026 05:39:49 +0000 (0:00:01.126) 0:29:46.123 ********
2026-04-11 05:40:10.171335 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:40:10.171346 | orchestrator |
2026-04-11 05:40:10.171356 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-11 05:40:10.171367 | orchestrator | Saturday 11 April 2026 05:39:51 +0000 (0:00:01.493) 0:29:47.616 ********
2026-04-11 05:40:10.171378 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:40:10.171389 | orchestrator |
2026-04-11 05:40:10.171400 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-11 05:40:10.171410 | orchestrator | Saturday 11 April 2026 05:39:52 +0000 (0:00:01.102) 0:29:48.719 ********
2026-04-11 05:40:10.171421 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:40:10.171432 | orchestrator |
2026-04-11 05:40:10.171442 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-11 05:40:10.171453 | orchestrator | Saturday 11 April 2026 05:39:53 +0000 (0:00:01.486) 0:29:50.206 ********
2026-04-11 05:40:10.171464 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:40:10.171475 | orchestrator |
2026-04-11 05:40:10.171485 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-11 05:40:10.171496 | orchestrator | Saturday 11 April 2026 05:39:55 +0000 (0:00:01.117) 0:29:51.323 ********
2026-04-11 05:40:10.171507 | orchestrator | ok: [testbed-node-2]
2026-04-11 05:40:10.171517 | orchestrator |
2026-04-11 05:40:10.171528 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python]
********************* 2026-04-11 05:40:10.171539 | orchestrator | Saturday 11 April 2026 05:39:56 +0000 (0:00:01.156) 0:29:52.480 ******** 2026-04-11 05:40:10.171575 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:40:10.171587 | orchestrator | 2026-04-11 05:40:10.171598 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-11 05:40:10.171609 | orchestrator | Saturday 11 April 2026 05:39:57 +0000 (0:00:01.158) 0:29:53.638 ******** 2026-04-11 05:40:10.171620 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:10.171630 | orchestrator | 2026-04-11 05:40:10.171641 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-11 05:40:10.171651 | orchestrator | Saturday 11 April 2026 05:39:58 +0000 (0:00:01.185) 0:29:54.824 ******** 2026-04-11 05:40:10.171662 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:40:10.171673 | orchestrator | 2026-04-11 05:40:10.171683 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-11 05:40:10.171694 | orchestrator | Saturday 11 April 2026 05:39:59 +0000 (0:00:01.110) 0:29:55.934 ******** 2026-04-11 05:40:10.171705 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:40:10.171715 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:40:10.171726 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-11 05:40:10.171736 | orchestrator | 2026-04-11 05:40:10.171747 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-11 05:40:10.171783 | orchestrator | Saturday 11 April 2026 05:40:01 +0000 (0:00:01.793) 0:29:57.727 ******** 2026-04-11 05:40:10.171795 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:40:10.171806 | orchestrator | 2026-04-11 05:40:10.171817 | orchestrator | TASK [ceph-facts : 
Find a running mon container] ******************************* 2026-04-11 05:40:10.171827 | orchestrator | Saturday 11 April 2026 05:40:02 +0000 (0:00:01.280) 0:29:59.008 ******** 2026-04-11 05:40:10.171844 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:40:10.171855 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:40:10.171865 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-11 05:40:10.171876 | orchestrator | 2026-04-11 05:40:10.171887 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-11 05:40:10.171899 | orchestrator | Saturday 11 April 2026 05:40:05 +0000 (0:00:02.867) 0:30:01.876 ******** 2026-04-11 05:40:10.171918 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-11 05:40:10.171937 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-11 05:40:10.171955 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-11 05:40:10.171967 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:10.171978 | orchestrator | 2026-04-11 05:40:10.171989 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-11 05:40:10.171999 | orchestrator | Saturday 11 April 2026 05:40:07 +0000 (0:00:01.357) 0:30:03.233 ******** 2026-04-11 05:40:10.172011 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-11 05:40:10.172025 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 
'item'})  2026-04-11 05:40:10.172036 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-11 05:40:10.172047 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:10.172058 | orchestrator | 2026-04-11 05:40:10.172068 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-11 05:40:10.172079 | orchestrator | Saturday 11 April 2026 05:40:08 +0000 (0:00:01.857) 0:30:05.091 ******** 2026-04-11 05:40:10.172092 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 05:40:10.172105 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 05:40:10.172116 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 
'item'})  2026-04-11 05:40:10.172136 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:10.172147 | orchestrator | 2026-04-11 05:40:10.172158 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-11 05:40:10.172168 | orchestrator | Saturday 11 April 2026 05:40:10 +0000 (0:00:01.210) 0:30:06.302 ******** 2026-04-11 05:40:10.172190 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 05:40:03.325377', 'end': '2026-04-11 05:40:03.370539', 'delta': '0:00:00.045162', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-11 05:40:28.466221 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '26fb3b048944', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 05:40:03.890407', 'end': '2026-04-11 05:40:03.952216', 'delta': '0:00:00.061809', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26fb3b048944'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-11 05:40:28.466339 | orchestrator | ok: 
[testbed-node-2] => (item={'changed': False, 'stdout': '5c0324173fbf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 05:40:04.467442', 'end': '2026-04-11 05:40:04.526311', 'delta': '0:00:00.058869', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5c0324173fbf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-11 05:40:28.466453 | orchestrator | 2026-04-11 05:40:28.466479 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-11 05:40:28.466499 | orchestrator | Saturday 11 April 2026 05:40:11 +0000 (0:00:01.137) 0:30:07.440 ******** 2026-04-11 05:40:28.466519 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:40:28.466567 | orchestrator | 2026-04-11 05:40:28.466587 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-11 05:40:28.466604 | orchestrator | Saturday 11 April 2026 05:40:12 +0000 (0:00:01.219) 0:30:08.659 ******** 2026-04-11 05:40:28.466616 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:28.466628 | orchestrator | 2026-04-11 05:40:28.466639 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-11 05:40:28.466651 | orchestrator | Saturday 11 April 2026 05:40:13 +0000 (0:00:01.254) 0:30:09.914 ******** 2026-04-11 05:40:28.466664 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:40:28.466677 | orchestrator | 2026-04-11 05:40:28.466690 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-11 05:40:28.466702 | 
orchestrator | Saturday 11 April 2026 05:40:14 +0000 (0:00:01.182) 0:30:11.097 ******** 2026-04-11 05:40:28.466715 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:40:28.466727 | orchestrator | 2026-04-11 05:40:28.466740 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 05:40:28.466780 | orchestrator | Saturday 11 April 2026 05:40:16 +0000 (0:00:01.964) 0:30:13.061 ******** 2026-04-11 05:40:28.466800 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:40:28.466818 | orchestrator | 2026-04-11 05:40:28.466838 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-11 05:40:28.466858 | orchestrator | Saturday 11 April 2026 05:40:18 +0000 (0:00:01.155) 0:30:14.216 ******** 2026-04-11 05:40:28.466878 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:28.466897 | orchestrator | 2026-04-11 05:40:28.466916 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-11 05:40:28.466936 | orchestrator | Saturday 11 April 2026 05:40:19 +0000 (0:00:01.105) 0:30:15.322 ******** 2026-04-11 05:40:28.466956 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:28.466976 | orchestrator | 2026-04-11 05:40:28.466990 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 05:40:28.467003 | orchestrator | Saturday 11 April 2026 05:40:20 +0000 (0:00:01.256) 0:30:16.578 ******** 2026-04-11 05:40:28.467014 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:28.467025 | orchestrator | 2026-04-11 05:40:28.467036 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-11 05:40:28.467046 | orchestrator | Saturday 11 April 2026 05:40:21 +0000 (0:00:01.096) 0:30:17.675 ******** 2026-04-11 05:40:28.467057 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:28.467068 | 
orchestrator | 2026-04-11 05:40:28.467078 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-11 05:40:28.467089 | orchestrator | Saturday 11 April 2026 05:40:22 +0000 (0:00:01.116) 0:30:18.792 ******** 2026-04-11 05:40:28.467100 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:28.467110 | orchestrator | 2026-04-11 05:40:28.467121 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-11 05:40:28.467132 | orchestrator | Saturday 11 April 2026 05:40:23 +0000 (0:00:01.119) 0:30:19.911 ******** 2026-04-11 05:40:28.467142 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:28.467153 | orchestrator | 2026-04-11 05:40:28.467164 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-11 05:40:28.467174 | orchestrator | Saturday 11 April 2026 05:40:24 +0000 (0:00:01.150) 0:30:21.062 ******** 2026-04-11 05:40:28.467185 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:28.467196 | orchestrator | 2026-04-11 05:40:28.467206 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-11 05:40:28.467238 | orchestrator | Saturday 11 April 2026 05:40:26 +0000 (0:00:01.162) 0:30:22.225 ******** 2026-04-11 05:40:28.467250 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:28.467261 | orchestrator | 2026-04-11 05:40:28.467271 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-11 05:40:28.467283 | orchestrator | Saturday 11 April 2026 05:40:27 +0000 (0:00:01.118) 0:30:23.344 ******** 2026-04-11 05:40:28.467304 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:28.467315 | orchestrator | 2026-04-11 05:40:28.467325 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-11 05:40:28.467336 | orchestrator | Saturday 11 April 2026 
05:40:28 +0000 (0:00:01.193) 0:30:24.537 ******** 2026-04-11 05:40:28.467349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:40:28.467363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:40:28.467384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:40:28.467396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE 
[Natoma/Triton II]', 'holders': []}})  2026-04-11 05:40:28.467409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:40:28.467421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:40:28.467432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:40:28.467464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6e1b70df', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16', 
'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-11 05:40:29.726975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:40:29.727086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:40:29.727103 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:40:29.727117 | orchestrator | 2026-04-11 05:40:29.727129 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 05:40:29.727141 | orchestrator | Saturday 11 April 2026 05:40:29 +0000 (0:00:01.249) 0:30:25.787 ******** 2026-04-11 05:40:29.727154 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:40:29.727167 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:40:29.727196 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:40:29.727209 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-25-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:40:29.727261 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:40:29.727274 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:40:29.727285 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:40:29.727306 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6e1b70df', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1', 'scsi-SQEMU_QEMU_HARDDISK_6e1b70df-e983-45b9-8c79-0f15e5c6cff7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:40:29.727337 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:41:07.086404 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:41:07.086611 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.086637 | orchestrator | 2026-04-11 05:41:07.086654 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-11 05:41:07.086668 | 
orchestrator | Saturday 11 April 2026 05:40:30 +0000 (0:00:01.365) 0:30:27.152 ******** 2026-04-11 05:41:07.086681 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:07.086694 | orchestrator | 2026-04-11 05:41:07.086707 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-11 05:41:07.086719 | orchestrator | Saturday 11 April 2026 05:40:32 +0000 (0:00:01.567) 0:30:28.720 ******** 2026-04-11 05:41:07.086731 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:07.086744 | orchestrator | 2026-04-11 05:41:07.086757 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:41:07.086768 | orchestrator | Saturday 11 April 2026 05:40:33 +0000 (0:00:01.121) 0:30:29.842 ******** 2026-04-11 05:41:07.086780 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:07.086794 | orchestrator | 2026-04-11 05:41:07.086806 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:41:07.086819 | orchestrator | Saturday 11 April 2026 05:40:35 +0000 (0:00:01.503) 0:30:31.345 ******** 2026-04-11 05:41:07.086832 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.086845 | orchestrator | 2026-04-11 05:41:07.086857 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:41:07.086869 | orchestrator | Saturday 11 April 2026 05:40:36 +0000 (0:00:01.151) 0:30:32.496 ******** 2026-04-11 05:41:07.086881 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.086892 | orchestrator | 2026-04-11 05:41:07.086902 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:41:07.086913 | orchestrator | Saturday 11 April 2026 05:40:37 +0000 (0:00:01.272) 0:30:33.769 ******** 2026-04-11 05:41:07.086926 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.086939 | orchestrator | 2026-04-11 05:41:07.086954 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 05:41:07.086991 | orchestrator | Saturday 11 April 2026 05:40:38 +0000 (0:00:01.199) 0:30:34.968 ******** 2026-04-11 05:41:07.087004 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-11 05:41:07.087017 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-11 05:41:07.087030 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-11 05:41:07.087045 | orchestrator | 2026-04-11 05:41:07.087059 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 05:41:07.087073 | orchestrator | Saturday 11 April 2026 05:40:40 +0000 (0:00:01.651) 0:30:36.620 ******** 2026-04-11 05:41:07.087088 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-11 05:41:07.087117 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-11 05:41:07.087132 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-11 05:41:07.087145 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.087158 | orchestrator | 2026-04-11 05:41:07.087172 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 05:41:07.087185 | orchestrator | Saturday 11 April 2026 05:40:41 +0000 (0:00:01.156) 0:30:37.777 ******** 2026-04-11 05:41:07.087197 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.087208 | orchestrator | 2026-04-11 05:41:07.087221 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 05:41:07.087234 | orchestrator | Saturday 11 April 2026 05:40:42 +0000 (0:00:01.137) 0:30:38.914 ******** 2026-04-11 05:41:07.087247 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:41:07.087259 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-04-11 05:41:07.087271 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-11 05:41:07.087284 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:41:07.087296 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:41:07.087308 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:41:07.087320 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:41:07.087331 | orchestrator | 2026-04-11 05:41:07.087343 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 05:41:07.087355 | orchestrator | Saturday 11 April 2026 05:40:44 +0000 (0:00:02.210) 0:30:41.125 ******** 2026-04-11 05:41:07.087368 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:41:07.087381 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:41:07.087394 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-11 05:41:07.087408 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:41:07.087440 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:41:07.087452 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:41:07.087464 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:41:07.087476 | orchestrator | 2026-04-11 05:41:07.087487 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 05:41:07.087534 | orchestrator | Saturday 11 April 2026 05:40:47 +0000 (0:00:02.261) 0:30:43.387 
******** 2026-04-11 05:41:07.087546 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-04-11 05:41:07.087559 | orchestrator | 2026-04-11 05:41:07.087572 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 05:41:07.087584 | orchestrator | Saturday 11 April 2026 05:40:48 +0000 (0:00:01.242) 0:30:44.629 ******** 2026-04-11 05:41:07.087597 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-04-11 05:41:07.087616 | orchestrator | 2026-04-11 05:41:07.087627 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 05:41:07.087639 | orchestrator | Saturday 11 April 2026 05:40:49 +0000 (0:00:01.141) 0:30:45.771 ******** 2026-04-11 05:41:07.087650 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:07.087661 | orchestrator | 2026-04-11 05:41:07.087672 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 05:41:07.087684 | orchestrator | Saturday 11 April 2026 05:40:51 +0000 (0:00:01.502) 0:30:47.273 ******** 2026-04-11 05:41:07.087696 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.087709 | orchestrator | 2026-04-11 05:41:07.087722 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-11 05:41:07.087734 | orchestrator | Saturday 11 April 2026 05:40:52 +0000 (0:00:01.107) 0:30:48.381 ******** 2026-04-11 05:41:07.087746 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.087759 | orchestrator | 2026-04-11 05:41:07.087770 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 05:41:07.087783 | orchestrator | Saturday 11 April 2026 05:40:53 +0000 (0:00:01.114) 0:30:49.495 ******** 2026-04-11 05:41:07.087795 | orchestrator | skipping: [testbed-node-2] 2026-04-11 
05:41:07.087808 | orchestrator | 2026-04-11 05:41:07.087820 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 05:41:07.087833 | orchestrator | Saturday 11 April 2026 05:40:54 +0000 (0:00:01.143) 0:30:50.640 ******** 2026-04-11 05:41:07.087845 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:07.087858 | orchestrator | 2026-04-11 05:41:07.087869 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 05:41:07.087880 | orchestrator | Saturday 11 April 2026 05:40:56 +0000 (0:00:01.604) 0:30:52.244 ******** 2026-04-11 05:41:07.087891 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.087902 | orchestrator | 2026-04-11 05:41:07.087914 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 05:41:07.087925 | orchestrator | Saturday 11 April 2026 05:40:57 +0000 (0:00:01.141) 0:30:53.385 ******** 2026-04-11 05:41:07.087937 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.087949 | orchestrator | 2026-04-11 05:41:07.087962 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 05:41:07.087974 | orchestrator | Saturday 11 April 2026 05:40:58 +0000 (0:00:01.191) 0:30:54.577 ******** 2026-04-11 05:41:07.087986 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:07.087998 | orchestrator | 2026-04-11 05:41:07.088009 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 05:41:07.088026 | orchestrator | Saturday 11 April 2026 05:40:59 +0000 (0:00:01.548) 0:30:56.126 ******** 2026-04-11 05:41:07.088037 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:07.088049 | orchestrator | 2026-04-11 05:41:07.088061 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 05:41:07.088073 | orchestrator | Saturday 11 April 2026 
05:41:01 +0000 (0:00:01.543) 0:30:57.669 ******** 2026-04-11 05:41:07.088085 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.088098 | orchestrator | 2026-04-11 05:41:07.088109 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 05:41:07.088121 | orchestrator | Saturday 11 April 2026 05:41:02 +0000 (0:00:00.791) 0:30:58.461 ******** 2026-04-11 05:41:07.088133 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:07.088145 | orchestrator | 2026-04-11 05:41:07.088155 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 05:41:07.088167 | orchestrator | Saturday 11 April 2026 05:41:03 +0000 (0:00:00.792) 0:30:59.253 ******** 2026-04-11 05:41:07.088179 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.088191 | orchestrator | 2026-04-11 05:41:07.088203 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 05:41:07.088216 | orchestrator | Saturday 11 April 2026 05:41:03 +0000 (0:00:00.846) 0:31:00.100 ******** 2026-04-11 05:41:07.088227 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.088248 | orchestrator | 2026-04-11 05:41:07.088261 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 05:41:07.088273 | orchestrator | Saturday 11 April 2026 05:41:04 +0000 (0:00:00.790) 0:31:00.891 ******** 2026-04-11 05:41:07.088285 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.088297 | orchestrator | 2026-04-11 05:41:07.088309 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 05:41:07.088322 | orchestrator | Saturday 11 April 2026 05:41:05 +0000 (0:00:00.765) 0:31:01.657 ******** 2026-04-11 05:41:07.088334 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.088346 | orchestrator | 2026-04-11 05:41:07.088358 | orchestrator | TASK 
[ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 05:41:07.088371 | orchestrator | Saturday 11 April 2026 05:41:06 +0000 (0:00:00.757) 0:31:02.415 ******** 2026-04-11 05:41:07.088382 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:07.088394 | orchestrator | 2026-04-11 05:41:07.088405 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 05:41:07.088418 | orchestrator | Saturday 11 April 2026 05:41:07 +0000 (0:00:00.806) 0:31:03.221 ******** 2026-04-11 05:41:07.088437 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:48.790769 | orchestrator | 2026-04-11 05:41:48.790866 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 05:41:48.790878 | orchestrator | Saturday 11 April 2026 05:41:07 +0000 (0:00:00.787) 0:31:04.008 ******** 2026-04-11 05:41:48.790885 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:48.790894 | orchestrator | 2026-04-11 05:41:48.790901 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 05:41:48.790908 | orchestrator | Saturday 11 April 2026 05:41:08 +0000 (0:00:00.894) 0:31:04.902 ******** 2026-04-11 05:41:48.790916 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:48.790923 | orchestrator | 2026-04-11 05:41:48.790930 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-11 05:41:48.790937 | orchestrator | Saturday 11 April 2026 05:41:09 +0000 (0:00:00.812) 0:31:05.715 ******** 2026-04-11 05:41:48.790944 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.790952 | orchestrator | 2026-04-11 05:41:48.790959 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-11 05:41:48.790966 | orchestrator | Saturday 11 April 2026 05:41:10 +0000 (0:00:00.785) 0:31:06.500 ******** 2026-04-11 05:41:48.790973 | 
orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.790980 | orchestrator | 2026-04-11 05:41:48.790988 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-11 05:41:48.790995 | orchestrator | Saturday 11 April 2026 05:41:11 +0000 (0:00:00.800) 0:31:07.301 ******** 2026-04-11 05:41:48.791001 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791008 | orchestrator | 2026-04-11 05:41:48.791015 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-11 05:41:48.791022 | orchestrator | Saturday 11 April 2026 05:41:11 +0000 (0:00:00.768) 0:31:08.070 ******** 2026-04-11 05:41:48.791029 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791036 | orchestrator | 2026-04-11 05:41:48.791043 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-11 05:41:48.791050 | orchestrator | Saturday 11 April 2026 05:41:12 +0000 (0:00:00.825) 0:31:08.895 ******** 2026-04-11 05:41:48.791056 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791063 | orchestrator | 2026-04-11 05:41:48.791070 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-11 05:41:48.791077 | orchestrator | Saturday 11 April 2026 05:41:13 +0000 (0:00:00.799) 0:31:09.695 ******** 2026-04-11 05:41:48.791084 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791099 | orchestrator | 2026-04-11 05:41:48.791106 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-11 05:41:48.791113 | orchestrator | Saturday 11 April 2026 05:41:14 +0000 (0:00:00.797) 0:31:10.493 ******** 2026-04-11 05:41:48.791120 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791145 | orchestrator | 2026-04-11 05:41:48.791152 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with 
ceph_stable_release] *** 2026-04-11 05:41:48.791159 | orchestrator | Saturday 11 April 2026 05:41:15 +0000 (0:00:00.758) 0:31:11.251 ******** 2026-04-11 05:41:48.791166 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791173 | orchestrator | 2026-04-11 05:41:48.791180 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-11 05:41:48.791187 | orchestrator | Saturday 11 April 2026 05:41:15 +0000 (0:00:00.752) 0:31:12.003 ******** 2026-04-11 05:41:48.791194 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791201 | orchestrator | 2026-04-11 05:41:48.791207 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-11 05:41:48.791214 | orchestrator | Saturday 11 April 2026 05:41:16 +0000 (0:00:00.788) 0:31:12.792 ******** 2026-04-11 05:41:48.791219 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791226 | orchestrator | 2026-04-11 05:41:48.791245 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-11 05:41:48.791252 | orchestrator | Saturday 11 April 2026 05:41:17 +0000 (0:00:00.810) 0:31:13.602 ******** 2026-04-11 05:41:48.791259 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791266 | orchestrator | 2026-04-11 05:41:48.791273 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-11 05:41:48.791280 | orchestrator | Saturday 11 April 2026 05:41:18 +0000 (0:00:00.804) 0:31:14.406 ******** 2026-04-11 05:41:48.791286 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791293 | orchestrator | 2026-04-11 05:41:48.791300 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-11 05:41:48.791307 | orchestrator | Saturday 11 April 2026 05:41:19 +0000 (0:00:00.882) 0:31:15.289 ******** 2026-04-11 05:41:48.791314 | orchestrator | ok: [testbed-node-2] 
2026-04-11 05:41:48.791320 | orchestrator | 2026-04-11 05:41:48.791329 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-11 05:41:48.791338 | orchestrator | Saturday 11 April 2026 05:41:20 +0000 (0:00:01.666) 0:31:16.955 ******** 2026-04-11 05:41:48.791348 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:48.791357 | orchestrator | 2026-04-11 05:41:48.791367 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-11 05:41:48.791377 | orchestrator | Saturday 11 April 2026 05:41:22 +0000 (0:00:02.056) 0:31:19.012 ******** 2026-04-11 05:41:48.791387 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-04-11 05:41:48.791397 | orchestrator | 2026-04-11 05:41:48.791407 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-11 05:41:48.791417 | orchestrator | Saturday 11 April 2026 05:41:24 +0000 (0:00:01.294) 0:31:20.306 ******** 2026-04-11 05:41:48.791427 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791436 | orchestrator | 2026-04-11 05:41:48.791445 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-11 05:41:48.791468 | orchestrator | Saturday 11 April 2026 05:41:25 +0000 (0:00:01.174) 0:31:21.481 ******** 2026-04-11 05:41:48.791475 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791482 | orchestrator | 2026-04-11 05:41:48.791488 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-11 05:41:48.791498 | orchestrator | Saturday 11 April 2026 05:41:26 +0000 (0:00:01.171) 0:31:22.652 ******** 2026-04-11 05:41:48.791520 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 05:41:48.791530 | orchestrator | ok: [testbed-node-2] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 05:41:48.791540 | orchestrator | 2026-04-11 05:41:48.791550 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-11 05:41:48.791559 | orchestrator | Saturday 11 April 2026 05:41:28 +0000 (0:00:01.828) 0:31:24.481 ******** 2026-04-11 05:41:48.791568 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:48.791578 | orchestrator | 2026-04-11 05:41:48.791594 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-11 05:41:48.791603 | orchestrator | Saturday 11 April 2026 05:41:29 +0000 (0:00:01.581) 0:31:26.062 ******** 2026-04-11 05:41:48.791613 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791622 | orchestrator | 2026-04-11 05:41:48.791632 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-11 05:41:48.791641 | orchestrator | Saturday 11 April 2026 05:41:31 +0000 (0:00:01.285) 0:31:27.347 ******** 2026-04-11 05:41:48.791648 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791655 | orchestrator | 2026-04-11 05:41:48.791662 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-11 05:41:48.791669 | orchestrator | Saturday 11 April 2026 05:41:31 +0000 (0:00:00.802) 0:31:28.150 ******** 2026-04-11 05:41:48.791676 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791682 | orchestrator | 2026-04-11 05:41:48.791689 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-11 05:41:48.791696 | orchestrator | Saturday 11 April 2026 05:41:32 +0000 (0:00:00.790) 0:31:28.941 ******** 2026-04-11 05:41:48.791703 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-04-11 05:41:48.791710 | orchestrator | 2026-04-11 05:41:48.791717 | orchestrator | TASK 
[ceph-container-common : Pulling Ceph container image] ******************** 2026-04-11 05:41:48.791723 | orchestrator | Saturday 11 April 2026 05:41:33 +0000 (0:00:01.124) 0:31:30.066 ******** 2026-04-11 05:41:48.791730 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:48.791737 | orchestrator | 2026-04-11 05:41:48.791744 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-11 05:41:48.791751 | orchestrator | Saturday 11 April 2026 05:41:35 +0000 (0:00:01.744) 0:31:31.810 ******** 2026-04-11 05:41:48.791758 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 05:41:48.791765 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 05:41:48.791771 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 05:41:48.791778 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791785 | orchestrator | 2026-04-11 05:41:48.791792 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-11 05:41:48.791799 | orchestrator | Saturday 11 April 2026 05:41:36 +0000 (0:00:01.182) 0:31:32.993 ******** 2026-04-11 05:41:48.791806 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791813 | orchestrator | 2026-04-11 05:41:48.791819 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-11 05:41:48.791826 | orchestrator | Saturday 11 April 2026 05:41:37 +0000 (0:00:01.114) 0:31:34.107 ******** 2026-04-11 05:41:48.791833 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791840 | orchestrator | 2026-04-11 05:41:48.791847 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-11 05:41:48.791853 | orchestrator | Saturday 11 April 2026 05:41:39 +0000 (0:00:01.222) 0:31:35.329 ******** 2026-04-11 05:41:48.791860 
| orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791867 | orchestrator | 2026-04-11 05:41:48.791878 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-11 05:41:48.791885 | orchestrator | Saturday 11 April 2026 05:41:40 +0000 (0:00:01.138) 0:31:36.468 ******** 2026-04-11 05:41:48.791891 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791898 | orchestrator | 2026-04-11 05:41:48.791905 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-11 05:41:48.791912 | orchestrator | Saturday 11 April 2026 05:41:41 +0000 (0:00:01.173) 0:31:37.641 ******** 2026-04-11 05:41:48.791919 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.791926 | orchestrator | 2026-04-11 05:41:48.791933 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-11 05:41:48.791940 | orchestrator | Saturday 11 April 2026 05:41:42 +0000 (0:00:00.774) 0:31:38.415 ******** 2026-04-11 05:41:48.791952 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:48.791959 | orchestrator | 2026-04-11 05:41:48.791966 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-11 05:41:48.791973 | orchestrator | Saturday 11 April 2026 05:41:44 +0000 (0:00:02.189) 0:31:40.605 ******** 2026-04-11 05:41:48.791980 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:41:48.791987 | orchestrator | 2026-04-11 05:41:48.791993 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-11 05:41:48.792000 | orchestrator | Saturday 11 April 2026 05:41:45 +0000 (0:00:00.758) 0:31:41.364 ******** 2026-04-11 05:41:48.792007 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-04-11 05:41:48.792014 | orchestrator | 2026-04-11 05:41:48.792020 | orchestrator | TASK [ceph-container-common : 
Set_fact ceph_release jewel] ********************* 2026-04-11 05:41:48.792027 | orchestrator | Saturday 11 April 2026 05:41:46 +0000 (0:00:01.132) 0:31:42.497 ******** 2026-04-11 05:41:48.792034 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.792041 | orchestrator | 2026-04-11 05:41:48.792048 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-11 05:41:48.792055 | orchestrator | Saturday 11 April 2026 05:41:47 +0000 (0:00:01.197) 0:31:43.695 ******** 2026-04-11 05:41:48.792062 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.792068 | orchestrator | 2026-04-11 05:41:48.792075 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-11 05:41:48.792082 | orchestrator | Saturday 11 April 2026 05:41:48 +0000 (0:00:01.126) 0:31:44.821 ******** 2026-04-11 05:41:48.792089 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:41:48.792096 | orchestrator | 2026-04-11 05:41:48.792107 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-11 05:42:23.488598 | orchestrator | Saturday 11 April 2026 05:41:49 +0000 (0:00:01.169) 0:31:45.990 ******** 2026-04-11 05:42:23.488719 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.488736 | orchestrator | 2026-04-11 05:42:23.488748 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-11 05:42:23.488760 | orchestrator | Saturday 11 April 2026 05:41:50 +0000 (0:00:01.157) 0:31:47.147 ******** 2026-04-11 05:42:23.488770 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.488781 | orchestrator | 2026-04-11 05:42:23.488792 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-11 05:42:23.488804 | orchestrator | Saturday 11 April 2026 05:41:52 +0000 (0:00:01.133) 0:31:48.281 ******** 2026-04-11 05:42:23.488814 | orchestrator | 
skipping: [testbed-node-2] 2026-04-11 05:42:23.488825 | orchestrator | 2026-04-11 05:42:23.488836 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-11 05:42:23.488847 | orchestrator | Saturday 11 April 2026 05:41:53 +0000 (0:00:01.246) 0:31:49.528 ******** 2026-04-11 05:42:23.488858 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.488869 | orchestrator | 2026-04-11 05:42:23.488879 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-11 05:42:23.488890 | orchestrator | Saturday 11 April 2026 05:41:54 +0000 (0:00:01.139) 0:31:50.668 ******** 2026-04-11 05:42:23.488901 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.488912 | orchestrator | 2026-04-11 05:42:23.488922 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-11 05:42:23.488933 | orchestrator | Saturday 11 April 2026 05:41:55 +0000 (0:00:01.176) 0:31:51.844 ******** 2026-04-11 05:42:23.488944 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:42:23.488956 | orchestrator | 2026-04-11 05:42:23.488967 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-11 05:42:23.488978 | orchestrator | Saturday 11 April 2026 05:41:56 +0000 (0:00:00.817) 0:31:52.662 ******** 2026-04-11 05:42:23.488988 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-04-11 05:42:23.488999 | orchestrator | 2026-04-11 05:42:23.489010 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-11 05:42:23.489045 | orchestrator | Saturday 11 April 2026 05:41:57 +0000 (0:00:01.127) 0:31:53.790 ******** 2026-04-11 05:42:23.489057 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-04-11 05:42:23.489068 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-11 
05:42:23.489081 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-11 05:42:23.489094 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-11 05:42:23.489106 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-11 05:42:23.489119 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-11 05:42:23.489131 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-11 05:42:23.489143 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-11 05:42:23.489155 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 05:42:23.489168 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 05:42:23.489180 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 05:42:23.489193 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 05:42:23.489222 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 05:42:23.489235 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 05:42:23.489247 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-04-11 05:42:23.489259 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-04-11 05:42:23.489272 | orchestrator | 2026-04-11 05:42:23.489285 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-11 05:42:23.489297 | orchestrator | Saturday 11 April 2026 05:42:03 +0000 (0:00:06.328) 0:32:00.118 ******** 2026-04-11 05:42:23.489310 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489323 | orchestrator | 2026-04-11 05:42:23.489335 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-11 05:42:23.489347 | orchestrator | Saturday 11 April 2026 05:42:04 +0000 (0:00:00.790) 0:32:00.909 ******** 
2026-04-11 05:42:23.489359 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489372 | orchestrator | 2026-04-11 05:42:23.489385 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-11 05:42:23.489397 | orchestrator | Saturday 11 April 2026 05:42:05 +0000 (0:00:00.757) 0:32:01.666 ******** 2026-04-11 05:42:23.489409 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489422 | orchestrator | 2026-04-11 05:42:23.489456 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-11 05:42:23.489467 | orchestrator | Saturday 11 April 2026 05:42:06 +0000 (0:00:00.786) 0:32:02.452 ******** 2026-04-11 05:42:23.489478 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489488 | orchestrator | 2026-04-11 05:42:23.489499 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-11 05:42:23.489510 | orchestrator | Saturday 11 April 2026 05:42:07 +0000 (0:00:00.779) 0:32:03.232 ******** 2026-04-11 05:42:23.489520 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489531 | orchestrator | 2026-04-11 05:42:23.489541 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-11 05:42:23.489552 | orchestrator | Saturday 11 April 2026 05:42:07 +0000 (0:00:00.766) 0:32:03.998 ******** 2026-04-11 05:42:23.489563 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489573 | orchestrator | 2026-04-11 05:42:23.489584 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-11 05:42:23.489595 | orchestrator | Saturday 11 April 2026 05:42:08 +0000 (0:00:00.821) 0:32:04.820 ******** 2026-04-11 05:42:23.489605 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489616 | orchestrator | 2026-04-11 05:42:23.489643 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-11 05:42:23.489655 | orchestrator | Saturday 11 April 2026 05:42:09 +0000 (0:00:00.836) 0:32:05.656 ******** 2026-04-11 05:42:23.489674 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489685 | orchestrator | 2026-04-11 05:42:23.489696 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-11 05:42:23.489706 | orchestrator | Saturday 11 April 2026 05:42:10 +0000 (0:00:00.827) 0:32:06.483 ******** 2026-04-11 05:42:23.489717 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489728 | orchestrator | 2026-04-11 05:42:23.489738 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-11 05:42:23.489748 | orchestrator | Saturday 11 April 2026 05:42:11 +0000 (0:00:00.792) 0:32:07.275 ******** 2026-04-11 05:42:23.489759 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489769 | orchestrator | 2026-04-11 05:42:23.489780 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-11 05:42:23.489791 | orchestrator | Saturday 11 April 2026 05:42:11 +0000 (0:00:00.805) 0:32:08.081 ******** 2026-04-11 05:42:23.489801 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489812 | orchestrator | 2026-04-11 05:42:23.489822 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-11 05:42:23.489833 | orchestrator | Saturday 11 April 2026 05:42:12 +0000 (0:00:00.789) 0:32:08.871 ******** 2026-04-11 05:42:23.489843 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489854 | orchestrator | 2026-04-11 05:42:23.489864 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-11 05:42:23.489875 | orchestrator | Saturday 11 April 2026 05:42:13 +0000 
(0:00:00.810) 0:32:09.682 ******** 2026-04-11 05:42:23.489885 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489896 | orchestrator | 2026-04-11 05:42:23.489907 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-11 05:42:23.489917 | orchestrator | Saturday 11 April 2026 05:42:14 +0000 (0:00:00.934) 0:32:10.617 ******** 2026-04-11 05:42:23.489928 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489938 | orchestrator | 2026-04-11 05:42:23.489948 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-11 05:42:23.489959 | orchestrator | Saturday 11 April 2026 05:42:15 +0000 (0:00:00.747) 0:32:11.364 ******** 2026-04-11 05:42:23.489970 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.489980 | orchestrator | 2026-04-11 05:42:23.489991 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-11 05:42:23.490002 | orchestrator | Saturday 11 April 2026 05:42:16 +0000 (0:00:00.886) 0:32:12.250 ******** 2026-04-11 05:42:23.490012 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.490081 | orchestrator | 2026-04-11 05:42:23.490092 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-11 05:42:23.490103 | orchestrator | Saturday 11 April 2026 05:42:16 +0000 (0:00:00.767) 0:32:13.018 ******** 2026-04-11 05:42:23.490113 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.490124 | orchestrator | 2026-04-11 05:42:23.490135 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 05:42:23.490147 | orchestrator | Saturday 11 April 2026 05:42:17 +0000 (0:00:00.767) 0:32:13.785 ******** 2026-04-11 05:42:23.490158 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.490168 | orchestrator | 
2026-04-11 05:42:23.490179 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 05:42:23.490195 | orchestrator | Saturday 11 April 2026 05:42:18 +0000 (0:00:00.785) 0:32:14.570 ******** 2026-04-11 05:42:23.490206 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.490217 | orchestrator | 2026-04-11 05:42:23.490227 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 05:42:23.490238 | orchestrator | Saturday 11 April 2026 05:42:19 +0000 (0:00:00.854) 0:32:15.425 ******** 2026-04-11 05:42:23.490248 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.490259 | orchestrator | 2026-04-11 05:42:23.490270 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 05:42:23.490288 | orchestrator | Saturday 11 April 2026 05:42:20 +0000 (0:00:00.817) 0:32:16.243 ******** 2026-04-11 05:42:23.490299 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.490309 | orchestrator | 2026-04-11 05:42:23.490320 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 05:42:23.490330 | orchestrator | Saturday 11 April 2026 05:42:20 +0000 (0:00:00.809) 0:32:17.053 ******** 2026-04-11 05:42:23.490341 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-11 05:42:23.490352 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-11 05:42:23.490363 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-11 05:42:23.490373 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.490384 | orchestrator | 2026-04-11 05:42:23.490395 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 05:42:23.490405 | orchestrator | Saturday 11 April 2026 05:42:21 +0000 (0:00:01.086) 0:32:18.139 ******** 2026-04-11 05:42:23.490416 | 
orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-11 05:42:23.490427 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-11 05:42:23.490457 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-11 05:42:23.490468 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.490479 | orchestrator | 2026-04-11 05:42:23.490489 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 05:42:23.490500 | orchestrator | Saturday 11 April 2026 05:42:23 +0000 (0:00:01.134) 0:32:19.273 ******** 2026-04-11 05:42:23.490511 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-11 05:42:23.490521 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-11 05:42:23.490532 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-11 05:42:23.490543 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:42:23.490553 | orchestrator | 2026-04-11 05:42:23.490571 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 05:43:27.128950 | orchestrator | Saturday 11 April 2026 05:42:24 +0000 (0:00:01.078) 0:32:20.352 ******** 2026-04-11 05:43:27.129104 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:43:27.129132 | orchestrator | 2026-04-11 05:43:27.129151 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 05:43:27.129171 | orchestrator | Saturday 11 April 2026 05:42:24 +0000 (0:00:00.751) 0:32:21.104 ******** 2026-04-11 05:43:27.129189 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-11 05:43:27.129207 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:43:27.129225 | orchestrator | 2026-04-11 05:43:27.129242 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-11 05:43:27.129261 | orchestrator | 
Saturday 11 April 2026 05:42:25 +0000 (0:00:00.934) 0:32:22.038 ******** 2026-04-11 05:43:27.129280 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:43:27.129299 | orchestrator | 2026-04-11 05:43:27.129317 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-11 05:43:27.129337 | orchestrator | Saturday 11 April 2026 05:42:27 +0000 (0:00:01.430) 0:32:23.469 ******** 2026-04-11 05:43:27.129355 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:43:27.129374 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:43:27.129426 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-11 05:43:27.129445 | orchestrator | 2026-04-11 05:43:27.129464 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-11 05:43:27.129485 | orchestrator | Saturday 11 April 2026 05:42:28 +0000 (0:00:01.729) 0:32:25.198 ******** 2026-04-11 05:43:27.129506 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2 2026-04-11 05:43:27.129525 | orchestrator | 2026-04-11 05:43:27.129545 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-11 05:43:27.129600 | orchestrator | Saturday 11 April 2026 05:42:30 +0000 (0:00:01.116) 0:32:26.315 ******** 2026-04-11 05:43:27.129621 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:43:27.129641 | orchestrator | 2026-04-11 05:43:27.129662 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-11 05:43:27.129681 | orchestrator | Saturday 11 April 2026 05:42:31 +0000 (0:00:01.591) 0:32:27.906 ******** 2026-04-11 05:43:27.129700 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:43:27.129718 | orchestrator | 2026-04-11 05:43:27.129737 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) 
on a mon node] ********************* 2026-04-11 05:43:27.129755 | orchestrator | Saturday 11 April 2026 05:42:32 +0000 (0:00:01.123) 0:32:29.030 ******** 2026-04-11 05:43:27.129773 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 05:43:27.129792 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 05:43:27.129810 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 05:43:27.129830 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}] 2026-04-11 05:43:27.129850 | orchestrator | 2026-04-11 05:43:27.129870 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-11 05:43:27.129889 | orchestrator | Saturday 11 April 2026 05:42:39 +0000 (0:00:07.108) 0:32:36.138 ******** 2026-04-11 05:43:27.129908 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:43:27.129926 | orchestrator | 2026-04-11 05:43:27.129943 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-11 05:43:27.129980 | orchestrator | Saturday 11 April 2026 05:42:41 +0000 (0:00:01.181) 0:32:37.320 ******** 2026-04-11 05:43:27.130000 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-11 05:43:27.130084 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-11 05:43:27.130099 | orchestrator | 2026-04-11 05:43:27.130111 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-11 05:43:27.130122 | orchestrator | Saturday 11 April 2026 05:42:44 +0000 (0:00:03.278) 0:32:40.598 ******** 2026-04-11 05:43:27.130132 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-11 05:43:27.130143 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-11 05:43:27.130154 | orchestrator | 2026-04-11 05:43:27.130165 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] 
************************************** 2026-04-11 05:43:27.130176 | orchestrator | Saturday 11 April 2026 05:42:46 +0000 (0:00:02.091) 0:32:42.689 ******** 2026-04-11 05:43:27.130186 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:43:27.130197 | orchestrator | 2026-04-11 05:43:27.130208 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-11 05:43:27.130218 | orchestrator | Saturday 11 April 2026 05:42:48 +0000 (0:00:01.571) 0:32:44.261 ******** 2026-04-11 05:43:27.130229 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:43:27.130240 | orchestrator | 2026-04-11 05:43:27.130250 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-11 05:43:27.130261 | orchestrator | Saturday 11 April 2026 05:42:48 +0000 (0:00:00.786) 0:32:45.048 ******** 2026-04-11 05:43:27.130271 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:43:27.130282 | orchestrator | 2026-04-11 05:43:27.130301 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-11 05:43:27.130318 | orchestrator | Saturday 11 April 2026 05:42:49 +0000 (0:00:00.799) 0:32:45.847 ******** 2026-04-11 05:43:27.130335 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2 2026-04-11 05:43:27.130353 | orchestrator | 2026-04-11 05:43:27.130369 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-11 05:43:27.130412 | orchestrator | Saturday 11 April 2026 05:42:50 +0000 (0:00:01.121) 0:32:46.969 ******** 2026-04-11 05:43:27.130433 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:43:27.130449 | orchestrator | 2026-04-11 05:43:27.130465 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-11 05:43:27.130497 | orchestrator | Saturday 11 April 2026 05:42:51 +0000 (0:00:01.157) 0:32:48.126 ******** 2026-04-11 
05:43:27.130513 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:43:27.130530 | orchestrator | 2026-04-11 05:43:27.130579 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-11 05:43:27.130600 | orchestrator | Saturday 11 April 2026 05:42:53 +0000 (0:00:01.127) 0:32:49.253 ******** 2026-04-11 05:43:27.130617 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2 2026-04-11 05:43:27.130634 | orchestrator | 2026-04-11 05:43:27.130653 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-11 05:43:27.130671 | orchestrator | Saturday 11 April 2026 05:42:54 +0000 (0:00:01.281) 0:32:50.535 ******** 2026-04-11 05:43:27.130690 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:43:27.130708 | orchestrator | 2026-04-11 05:43:27.130727 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-11 05:43:27.130747 | orchestrator | Saturday 11 April 2026 05:42:56 +0000 (0:00:02.110) 0:32:52.645 ******** 2026-04-11 05:43:27.130767 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:43:27.130787 | orchestrator | 2026-04-11 05:43:27.130806 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-11 05:43:27.130826 | orchestrator | Saturday 11 April 2026 05:42:58 +0000 (0:00:01.940) 0:32:54.586 ******** 2026-04-11 05:43:27.130846 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:43:27.130864 | orchestrator | 2026-04-11 05:43:27.130880 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-11 05:43:27.130898 | orchestrator | Saturday 11 April 2026 05:43:00 +0000 (0:00:02.393) 0:32:56.979 ******** 2026-04-11 05:43:27.130917 | orchestrator | changed: [testbed-node-2] 2026-04-11 05:43:27.130936 | orchestrator | 2026-04-11 05:43:27.130954 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2026-04-11 05:43:27.130972 | orchestrator | Saturday 11 April 2026 05:43:04 +0000 (0:00:03.349) 0:33:00.329 ******** 2026-04-11 05:43:27.130991 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-11 05:43:27.131009 | orchestrator | 2026-04-11 05:43:27.131027 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-11 05:43:27.131044 | orchestrator | Saturday 11 April 2026 05:43:05 +0000 (0:00:01.533) 0:33:01.863 ******** 2026-04-11 05:43:27.131061 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:43:27.131078 | orchestrator | 2026-04-11 05:43:27.131095 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-04-11 05:43:27.131112 | orchestrator | Saturday 11 April 2026 05:43:08 +0000 (0:00:02.531) 0:33:04.394 ******** 2026-04-11 05:43:27.131129 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:43:27.131146 | orchestrator | 2026-04-11 05:43:27.131164 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-04-11 05:43:27.131182 | orchestrator | Saturday 11 April 2026 05:43:10 +0000 (0:00:02.315) 0:33:06.710 ******** 2026-04-11 05:43:27.131201 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:43:27.131220 | orchestrator | 2026-04-11 05:43:27.131240 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-04-11 05:43:27.131258 | orchestrator | Saturday 11 April 2026 05:43:11 +0000 (0:00:01.336) 0:33:08.046 ******** 2026-04-11 05:43:27.131278 | orchestrator | ok: [testbed-node-2] 2026-04-11 05:43:27.131297 | orchestrator | 2026-04-11 05:43:27.131317 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-04-11 05:43:27.131336 | orchestrator | Saturday 11 April 2026 
05:43:13 +0000 (0:00:01.167) 0:33:09.213 ******** 2026-04-11 05:43:27.131356 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-04-11 05:43:27.131419 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-04-11 05:43:27.131442 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:43:27.131461 | orchestrator | 2026-04-11 05:43:27.131480 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-04-11 05:43:27.131517 | orchestrator | Saturday 11 April 2026 05:43:14 +0000 (0:00:01.823) 0:33:11.037 ******** 2026-04-11 05:43:27.131537 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-04-11 05:43:27.131556 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-04-11 05:43:27.131576 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-04-11 05:43:27.131594 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-04-11 05:43:27.131613 | orchestrator | skipping: [testbed-node-2] 2026-04-11 05:43:27.131633 | orchestrator | 2026-04-11 05:43:27.131653 | orchestrator | PLAY [Set osd flags] *********************************************************** 2026-04-11 05:43:27.131674 | orchestrator | 2026-04-11 05:43:27.131694 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-11 05:43:27.131714 | orchestrator | Saturday 11 April 2026 05:43:17 +0000 (0:00:02.677) 0:33:13.715 ******** 2026-04-11 05:43:27.131734 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:43:27.131754 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:43:27.131775 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:43:27.131794 | orchestrator | 2026-04-11 05:43:27.131814 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 05:43:27.131834 | orchestrator | Saturday 11 April 2026 05:43:19 +0000 (0:00:01.730) 0:33:15.446 ******** 2026-04-11 05:43:27.131852 | 
orchestrator | ok: [testbed-node-3] 2026-04-11 05:43:27.131869 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:43:27.131886 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:43:27.131903 | orchestrator | 2026-04-11 05:43:27.131918 | orchestrator | TASK [Get pool list] *********************************************************** 2026-04-11 05:43:27.131933 | orchestrator | Saturday 11 April 2026 05:43:20 +0000 (0:00:01.652) 0:33:17.098 ******** 2026-04-11 05:43:27.131948 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:43:27.131965 | orchestrator | 2026-04-11 05:43:27.131983 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-04-11 05:43:27.132000 | orchestrator | Saturday 11 April 2026 05:43:23 +0000 (0:00:03.082) 0:33:20.181 ******** 2026-04-11 05:43:27.132016 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:43:27.132034 | orchestrator | 2026-04-11 05:43:27.132051 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-04-11 05:43:27.132068 | orchestrator | Saturday 11 April 2026 05:43:26 +0000 (0:00:02.943) 0:33:23.125 ******** 2026-04-11 05:43:27.132118 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-04-11T03:00:36.700443+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 
'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-11 05:43:27.572931 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-04-11T03:01:50.721110+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 
'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '32', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-11 05:43:27.573039 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-04-11T03:01:54.802769+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 
0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-11 05:43:27.573112 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-04-11T03:02:57.634195+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 
32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '68', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '61', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-11 05:43:27.573131 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-04-11T03:03:03.839059+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 
'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '68', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '63', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-11 05:43:27.573166 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-04-11T03:03:10.079860+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 
'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '68', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '63', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-11 05:43:28.376099 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-04-11T03:03:16.299467+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 
'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '153', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '65', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-11 05:43:28.376242 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-04-11T03:03:22.180934+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 
'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '68', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '65', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-11 05:43:28.376281 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-04-11T03:03:34.735423+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 
0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '107', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '103', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.1299999952316284, 'score_stable': 1.1299999952316284, 'optimal_score': 1, 'raw_score_acting': 1.1299999952316284, 'raw_score_stable': 1.1299999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-11 05:43:28.376317 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-04-11T03:04:24.303561+0000', 'flags': 8193, 'flags_names': 
'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '95', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 95, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-11 05:43:28.376340 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 
'create_time': '2026-04-11T03:04:33.118668+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '105', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 105, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-11 05:44:47.977526 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-04-11T03:04:42.676441+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '160', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 160, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 
'average_primary_affinity_weighted': 1}}) 2026-04-11 05:44:47.977673 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-04-11T03:04:52.264769+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '122', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 122, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 
1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-11 05:44:47.977724 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-04-11T03:05:01.713445+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '131', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 131, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 
'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-11 05:44:47.977739 | orchestrator | 2026-04-11 05:44:47.977752 | orchestrator | TASK [Disable balancer] ******************************************************** 2026-04-11 05:44:47.977764 | orchestrator | Saturday 11 April 2026 05:43:29 +0000 (0:00:02.954) 0:33:26.079 ******** 2026-04-11 05:44:47.977776 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:44:47.977787 | orchestrator | 2026-04-11 05:44:47.977798 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-04-11 05:44:47.977808 | orchestrator | Saturday 11 April 2026 05:43:32 +0000 (0:00:02.948) 0:33:29.027 ******** 2026-04-11 05:44:47.977819 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-04-11 05:44:47.977832 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-04-11 05:44:47.977842 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-04-11 05:44:47.977853 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-04-11 05:44:47.977865 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-04-11 05:44:47.977876 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-04-11 05:44:47.977894 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-04-11 05:44:47.977905 | 
orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-04-11 05:44:47.977915 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-04-11 05:44:47.977926 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-04-11 05:44:47.977936 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-04-11 05:44:47.977947 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-04-11 05:44:47.977957 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-04-11 05:44:47.977968 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-04-11 05:44:47.977979 | orchestrator | 2026-04-11 05:44:47.977990 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-04-11 05:44:47.978008 | orchestrator | Saturday 11 April 2026 05:44:47 +0000 (0:01:15.151) 0:34:44.179 ******** 2026-04-11 05:45:15.361921 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-04-11 05:45:15.362008 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-04-11 05:45:15.362058 | orchestrator | 2026-04-11 05:45:15.362066 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-11 05:45:15.362072 | orchestrator | 2026-04-11 05:45:15.362077 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-11 05:45:15.362083 | orchestrator | Saturday 11 April 2026 05:44:53 +0000 (0:00:05.831) 0:34:50.010 ******** 2026-04-11 05:45:15.362088 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-04-11 05:45:15.362093 | orchestrator 
| 2026-04-11 05:45:15.362099 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-11 05:45:15.362115 | orchestrator | Saturday 11 April 2026 05:44:54 +0000 (0:00:01.116) 0:34:51.127 ******** 2026-04-11 05:45:15.362121 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:15.362128 | orchestrator | 2026-04-11 05:45:15.362133 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-11 05:45:15.362138 | orchestrator | Saturday 11 April 2026 05:44:56 +0000 (0:00:01.487) 0:34:52.614 ******** 2026-04-11 05:45:15.362143 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:15.362148 | orchestrator | 2026-04-11 05:45:15.362153 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-11 05:45:15.362158 | orchestrator | Saturday 11 April 2026 05:44:57 +0000 (0:00:01.130) 0:34:53.745 ******** 2026-04-11 05:45:15.362163 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:15.362169 | orchestrator | 2026-04-11 05:45:15.362174 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 05:45:15.362179 | orchestrator | Saturday 11 April 2026 05:44:59 +0000 (0:00:01.560) 0:34:55.306 ******** 2026-04-11 05:45:15.362184 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:15.362189 | orchestrator | 2026-04-11 05:45:15.362194 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-11 05:45:15.362199 | orchestrator | Saturday 11 April 2026 05:45:00 +0000 (0:00:01.250) 0:34:56.556 ******** 2026-04-11 05:45:15.362204 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:15.362209 | orchestrator | 2026-04-11 05:45:15.362214 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-11 05:45:15.362219 | orchestrator | Saturday 11 April 2026 05:45:01 +0000 (0:00:01.180) 0:34:57.737 ******** 
2026-04-11 05:45:15.362224 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:15.362229 | orchestrator | 2026-04-11 05:45:15.362234 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-11 05:45:15.362239 | orchestrator | Saturday 11 April 2026 05:45:02 +0000 (0:00:01.144) 0:34:58.881 ******** 2026-04-11 05:45:15.362261 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:45:15.362268 | orchestrator | 2026-04-11 05:45:15.362273 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-11 05:45:15.362278 | orchestrator | Saturday 11 April 2026 05:45:03 +0000 (0:00:01.152) 0:35:00.034 ******** 2026-04-11 05:45:15.362283 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:15.362288 | orchestrator | 2026-04-11 05:45:15.362293 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-11 05:45:15.362298 | orchestrator | Saturday 11 April 2026 05:45:04 +0000 (0:00:01.126) 0:35:01.160 ******** 2026-04-11 05:45:15.362303 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:45:15.362308 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:45:15.362313 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:45:15.362318 | orchestrator | 2026-04-11 05:45:15.362323 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-11 05:45:15.362375 | orchestrator | Saturday 11 April 2026 05:45:06 +0000 (0:00:01.707) 0:35:02.867 ******** 2026-04-11 05:45:15.362380 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:15.362385 | orchestrator | 2026-04-11 05:45:15.362390 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-11 05:45:15.362396 | orchestrator | 
Saturday 11 April 2026 05:45:07 +0000 (0:00:01.212) 0:35:04.080 ******** 2026-04-11 05:45:15.362401 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:45:15.362406 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:45:15.362411 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:45:15.362416 | orchestrator | 2026-04-11 05:45:15.362421 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-11 05:45:15.362426 | orchestrator | Saturday 11 April 2026 05:45:10 +0000 (0:00:02.819) 0:35:06.899 ******** 2026-04-11 05:45:15.362431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-11 05:45:15.362437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-11 05:45:15.362442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-11 05:45:15.362447 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:45:15.362452 | orchestrator | 2026-04-11 05:45:15.362457 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-11 05:45:15.362462 | orchestrator | Saturday 11 April 2026 05:45:12 +0000 (0:00:01.435) 0:35:08.334 ******** 2026-04-11 05:45:15.362470 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-11 05:45:15.362488 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-11 05:45:15.362494 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-11 05:45:15.362499 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:45:15.362505 | orchestrator | 2026-04-11 05:45:15.362510 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-11 05:45:15.362515 | orchestrator | Saturday 11 April 2026 05:45:14 +0000 (0:00:01.985) 0:35:10.321 ******** 2026-04-11 05:45:15.362525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:15.362539 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:15.362544 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:15.362550 | orchestrator | skipping: 
[testbed-node-3] 2026-04-11 05:45:15.362555 | orchestrator | 2026-04-11 05:45:15.362560 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-11 05:45:15.362565 | orchestrator | Saturday 11 April 2026 05:45:15 +0000 (0:00:01.134) 0:35:11.455 ******** 2026-04-11 05:45:15.362571 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 05:45:08.414157', 'end': '2026-04-11 05:45:08.464303', 'delta': '0:00:00.050146', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-11 05:45:15.362580 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '26fb3b048944', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 05:45:08.955168', 'end': '2026-04-11 05:45:09.004332', 'delta': '0:00:00.049164', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26fb3b048944'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-11 05:45:15.362590 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 
'5c0324173fbf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 05:45:09.504315', 'end': '2026-04-11 05:45:09.551049', 'delta': '0:00:00.046734', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5c0324173fbf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-11 05:45:34.347893 | orchestrator | 2026-04-11 05:45:34.348002 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-11 05:45:34.348018 | orchestrator | Saturday 11 April 2026 05:45:16 +0000 (0:00:01.206) 0:35:12.662 ******** 2026-04-11 05:45:34.348049 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:34.348061 | orchestrator | 2026-04-11 05:45:34.348071 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-11 05:45:34.348081 | orchestrator | Saturday 11 April 2026 05:45:17 +0000 (0:00:01.216) 0:35:13.879 ******** 2026-04-11 05:45:34.348091 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:45:34.348102 | orchestrator | 2026-04-11 05:45:34.348126 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-11 05:45:34.348136 | orchestrator | Saturday 11 April 2026 05:45:19 +0000 (0:00:01.797) 0:35:15.676 ******** 2026-04-11 05:45:34.348145 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:34.348155 | orchestrator | 2026-04-11 05:45:34.348164 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-11 05:45:34.348174 | orchestrator | Saturday 11 April 2026 05:45:20 +0000 
(0:00:01.147) 0:35:16.823 ******** 2026-04-11 05:45:34.348184 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:45:34.348195 | orchestrator | 2026-04-11 05:45:34.348204 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 05:45:34.348214 | orchestrator | Saturday 11 April 2026 05:45:22 +0000 (0:00:01.942) 0:35:18.765 ******** 2026-04-11 05:45:34.348223 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:34.348233 | orchestrator | 2026-04-11 05:45:34.348243 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-11 05:45:34.348252 | orchestrator | Saturday 11 April 2026 05:45:23 +0000 (0:00:01.156) 0:35:19.922 ******** 2026-04-11 05:45:34.348261 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:45:34.348271 | orchestrator | 2026-04-11 05:45:34.348281 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-11 05:45:34.348290 | orchestrator | Saturday 11 April 2026 05:45:24 +0000 (0:00:01.101) 0:35:21.023 ******** 2026-04-11 05:45:34.348300 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:45:34.348309 | orchestrator | 2026-04-11 05:45:34.348364 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 05:45:34.348374 | orchestrator | Saturday 11 April 2026 05:45:26 +0000 (0:00:01.225) 0:35:22.249 ******** 2026-04-11 05:45:34.348384 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:45:34.348393 | orchestrator | 2026-04-11 05:45:34.348403 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-11 05:45:34.348413 | orchestrator | Saturday 11 April 2026 05:45:27 +0000 (0:00:01.092) 0:35:23.342 ******** 2026-04-11 05:45:34.348422 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:45:34.348433 | orchestrator | 2026-04-11 05:45:34.348444 | orchestrator | 
TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-11 05:45:34.348456 | orchestrator | Saturday 11 April 2026 05:45:28 +0000 (0:00:01.123) 0:35:24.465 ******** 2026-04-11 05:45:34.348467 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:34.348478 | orchestrator | 2026-04-11 05:45:34.348489 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-11 05:45:34.348500 | orchestrator | Saturday 11 April 2026 05:45:29 +0000 (0:00:01.178) 0:35:25.643 ******** 2026-04-11 05:45:34.348511 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:45:34.348524 | orchestrator | 2026-04-11 05:45:34.348535 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-11 05:45:34.348546 | orchestrator | Saturday 11 April 2026 05:45:30 +0000 (0:00:01.173) 0:35:26.817 ******** 2026-04-11 05:45:34.348558 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:34.348569 | orchestrator | 2026-04-11 05:45:34.348581 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-11 05:45:34.348592 | orchestrator | Saturday 11 April 2026 05:45:31 +0000 (0:00:01.227) 0:35:28.045 ******** 2026-04-11 05:45:34.348604 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:45:34.348615 | orchestrator | 2026-04-11 05:45:34.348626 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-11 05:45:34.348638 | orchestrator | Saturday 11 April 2026 05:45:32 +0000 (0:00:01.130) 0:35:29.175 ******** 2026-04-11 05:45:34.348657 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:45:34.348668 | orchestrator | 2026-04-11 05:45:34.348679 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-11 05:45:34.348690 | orchestrator | Saturday 11 April 2026 05:45:34 +0000 (0:00:01.200) 0:35:30.376 ******** 2026-04-11 05:45:34.348704 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:45:34.348737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200', 'dm-uuid-LVM-Bzm8veJ8WajxWE1rbQG3D6L1YQ7NRJWE5nYLkJZ3j15jpE3LHjt0hSXc3WZuWEzG'], 'uuids': ['5687e399-36a2-4cfe-ae2f-5c9610714106'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG']}})  2026-04-11 05:45:34.348758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7', 'scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d9c4f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-11 05:45:34.348770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PQeocr-BDfK-Omm3-UVAY-4ZFi-qC83-UyfjmY', 'scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c', 'scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003']}})  2026-04-11 05:45:34.348781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:45:34.348792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:45:34.348804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-28-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 05:45:34.348822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:45:34.348832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ', 'dm-uuid-CRYPT-LUKS2-4ce930e6d90647c5bf5f978d8b977bd0-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 05:45:34.348851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:45:35.718190 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003', 'dm-uuid-LVM-pkDfTbVQleSwcS4k7Dh9BVsoBeNfZTa2LK4cAT3noeZwIltxQlTmbG23aNcLYOeQ'], 'uuids': ['4ce930e6-d906-47c5-bf5f-978d8b977bd0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ']}})  2026-04-11 05:45:35.718291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ESicMG-Y3he-y5ZC-yq3K-67sS-s0jj-bJ518K', 'scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898', 'scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200']}})  2026-04-11 05:45:35.718306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:45:35.718375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f54fce7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 05:45:35.718436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:45:35.718449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:45:35.718460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG', 'dm-uuid-CRYPT-LUKS2-5687e39936a24cfeae2f5c9610714106-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 05:45:35.718472 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:45:35.718484 | orchestrator | 2026-04-11 05:45:35.718494 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 05:45:35.718505 | orchestrator | Saturday 11 April 2026 05:45:35 +0000 (0:00:01.414) 0:35:31.790 ******** 2026-04-11 05:45:35.718517 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:35.718536 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200', 'dm-uuid-LVM-Bzm8veJ8WajxWE1rbQG3D6L1YQ7NRJWE5nYLkJZ3j15jpE3LHjt0hSXc3WZuWEzG'], 'uuids': ['5687e399-36a2-4cfe-ae2f-5c9610714106'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG']}}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:35.718547 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7', 'scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d9c4f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:35.718571 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PQeocr-BDfK-Omm3-UVAY-4ZFi-qC83-UyfjmY', 'scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c', 'scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003']}}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:35.834660 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:35.834752 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:35.834787 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-28-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:35.834799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:35.834809 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ', 'dm-uuid-CRYPT-LUKS2-4ce930e6d90647c5bf5f978d8b977bd0-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:35.834833 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:35.834862 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003', 'dm-uuid-LVM-pkDfTbVQleSwcS4k7Dh9BVsoBeNfZTa2LK4cAT3noeZwIltxQlTmbG23aNcLYOeQ'], 'uuids': ['4ce930e6-d906-47c5-bf5f-978d8b977bd0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ']}}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:35.834882 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ESicMG-Y3he-y5ZC-yq3K-67sS-s0jj-bJ518K', 'scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898', 'scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200']}}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:35.834895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:45:35.834919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f54fce7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:46:04.536977 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:46:04.537130 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:46:04.537152 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG', 'dm-uuid-CRYPT-LUKS2-5687e39936a24cfeae2f5c9610714106-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:46:04.537167 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:04.537181 | orchestrator | 2026-04-11 05:46:04.537193 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-11 05:46:04.537206 | orchestrator | Saturday 11 April 2026 05:45:36 +0000 (0:00:01.390) 0:35:33.181 ******** 2026-04-11 05:46:04.537217 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:04.537228 | orchestrator | 2026-04-11 05:46:04.537239 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-11 05:46:04.537250 | orchestrator | Saturday 11 April 2026 05:45:38 +0000 (0:00:01.555) 0:35:34.736 ******** 2026-04-11 05:46:04.537261 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:04.537272 | orchestrator | 2026-04-11 05:46:04.537283 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:46:04.537294 | orchestrator | Saturday 11 April 2026 05:45:39 +0000 (0:00:01.110) 0:35:35.847 ******** 2026-04-11 05:46:04.537369 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:04.537383 | orchestrator | 2026-04-11 05:46:04.537394 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:46:04.537405 | orchestrator | Saturday 11 April 2026 05:45:41 +0000 (0:00:01.558) 0:35:37.406 ******** 2026-04-11 05:46:04.537416 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:04.537428 | orchestrator | 2026-04-11 05:46:04.537456 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:46:04.537468 | orchestrator | Saturday 11 April 2026 05:45:42 +0000 (0:00:01.098) 0:35:38.505 ******** 2026-04-11 05:46:04.537481 | orchestrator | skipping: [testbed-node-3] 2026-04-11 
05:46:04.537494 | orchestrator | 2026-04-11 05:46:04.537506 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:46:04.537520 | orchestrator | Saturday 11 April 2026 05:45:43 +0000 (0:00:01.252) 0:35:39.757 ******** 2026-04-11 05:46:04.537532 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:04.537545 | orchestrator | 2026-04-11 05:46:04.537587 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 05:46:04.537615 | orchestrator | Saturday 11 April 2026 05:45:44 +0000 (0:00:01.150) 0:35:40.907 ******** 2026-04-11 05:46:04.537638 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-11 05:46:04.537656 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-11 05:46:04.537675 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-11 05:46:04.537692 | orchestrator | 2026-04-11 05:46:04.537711 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 05:46:04.537728 | orchestrator | Saturday 11 April 2026 05:45:46 +0000 (0:00:02.003) 0:35:42.910 ******** 2026-04-11 05:46:04.537748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-11 05:46:04.537766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-11 05:46:04.537784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-11 05:46:04.537803 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:04.537821 | orchestrator | 2026-04-11 05:46:04.537839 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 05:46:04.537857 | orchestrator | Saturday 11 April 2026 05:45:47 +0000 (0:00:01.203) 0:35:44.114 ******** 2026-04-11 05:46:04.537901 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-04-11 05:46:04.537923 | 
orchestrator | 2026-04-11 05:46:04.537941 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 05:46:04.537963 | orchestrator | Saturday 11 April 2026 05:45:49 +0000 (0:00:01.158) 0:35:45.273 ******** 2026-04-11 05:46:04.537976 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:04.537988 | orchestrator | 2026-04-11 05:46:04.537999 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 05:46:04.538010 | orchestrator | Saturday 11 April 2026 05:45:50 +0000 (0:00:01.181) 0:35:46.455 ******** 2026-04-11 05:46:04.538090 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:04.538102 | orchestrator | 2026-04-11 05:46:04.538113 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 05:46:04.538123 | orchestrator | Saturday 11 April 2026 05:45:51 +0000 (0:00:01.238) 0:35:47.693 ******** 2026-04-11 05:46:04.538134 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:04.538145 | orchestrator | 2026-04-11 05:46:04.538156 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 05:46:04.538167 | orchestrator | Saturday 11 April 2026 05:45:52 +0000 (0:00:01.142) 0:35:48.835 ******** 2026-04-11 05:46:04.538178 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:04.538189 | orchestrator | 2026-04-11 05:46:04.538199 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 05:46:04.538210 | orchestrator | Saturday 11 April 2026 05:45:53 +0000 (0:00:01.223) 0:35:50.059 ******** 2026-04-11 05:46:04.538221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 05:46:04.538232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 05:46:04.538243 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-04-11 05:46:04.538254 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:04.538265 | orchestrator | 2026-04-11 05:46:04.538275 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 05:46:04.538286 | orchestrator | Saturday 11 April 2026 05:45:55 +0000 (0:00:01.401) 0:35:51.460 ******** 2026-04-11 05:46:04.538297 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 05:46:04.538340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 05:46:04.538361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 05:46:04.538380 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:04.538398 | orchestrator | 2026-04-11 05:46:04.538412 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 05:46:04.538438 | orchestrator | Saturday 11 April 2026 05:45:56 +0000 (0:00:01.428) 0:35:52.889 ******** 2026-04-11 05:46:04.538449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 05:46:04.538460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 05:46:04.538471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 05:46:04.538482 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:04.538492 | orchestrator | 2026-04-11 05:46:04.538503 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 05:46:04.538514 | orchestrator | Saturday 11 April 2026 05:45:58 +0000 (0:00:01.417) 0:35:54.306 ******** 2026-04-11 05:46:04.538525 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:04.538536 | orchestrator | 2026-04-11 05:46:04.538547 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 05:46:04.538558 | orchestrator | Saturday 11 April 2026 05:45:59 +0000 
(0:00:01.182) 0:35:55.489 ******** 2026-04-11 05:46:04.538569 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-11 05:46:04.538580 | orchestrator | 2026-04-11 05:46:04.538591 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 05:46:04.538602 | orchestrator | Saturday 11 April 2026 05:46:00 +0000 (0:00:01.337) 0:35:56.826 ******** 2026-04-11 05:46:04.538613 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:46:04.538623 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:46:04.538642 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:46:04.538653 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-11 05:46:04.538665 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:46:04.538676 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:46:04.538686 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:46:04.538697 | orchestrator | 2026-04-11 05:46:04.538708 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 05:46:04.538719 | orchestrator | Saturday 11 April 2026 05:46:02 +0000 (0:00:02.182) 0:35:59.008 ******** 2026-04-11 05:46:04.538729 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:46:04.538740 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:46:04.538751 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:46:04.538762 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-11 05:46:04.538773 
| orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:46:04.538784 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 05:46:04.538795 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:46:04.538806 | orchestrator | 2026-04-11 05:46:04.538828 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-04-11 05:46:58.235792 | orchestrator | Saturday 11 April 2026 05:46:05 +0000 (0:00:02.649) 0:36:01.658 ******** 2026-04-11 05:46:58.235906 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.235923 | orchestrator | 2026-04-11 05:46:58.235936 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-04-11 05:46:58.235947 | orchestrator | Saturday 11 April 2026 05:46:06 +0000 (0:00:01.432) 0:36:03.090 ******** 2026-04-11 05:46:58.235958 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.235970 | orchestrator | 2026-04-11 05:46:58.235981 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-04-11 05:46:58.235992 | orchestrator | Saturday 11 April 2026 05:46:08 +0000 (0:00:01.151) 0:36:04.242 ******** 2026-04-11 05:46:58.236004 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.236067 | orchestrator | 2026-04-11 05:46:58.236080 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-04-11 05:46:58.236091 | orchestrator | Saturday 11 April 2026 05:46:09 +0000 (0:00:01.702) 0:36:05.945 ******** 2026-04-11 05:46:58.236102 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-11 05:46:58.236115 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-11 05:46:58.236126 | orchestrator | 2026-04-11 05:46:58.236137 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-04-11 05:46:58.236148 | orchestrator | Saturday 11 April 2026 05:46:13 +0000 (0:00:04.066) 0:36:10.011 ******** 2026-04-11 05:46:58.236158 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-04-11 05:46:58.236171 | orchestrator | 2026-04-11 05:46:58.236182 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 05:46:58.236192 | orchestrator | Saturday 11 April 2026 05:46:14 +0000 (0:00:01.156) 0:36:11.167 ******** 2026-04-11 05:46:58.236203 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-04-11 05:46:58.236214 | orchestrator | 2026-04-11 05:46:58.236225 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 05:46:58.236236 | orchestrator | Saturday 11 April 2026 05:46:16 +0000 (0:00:01.122) 0:36:12.290 ******** 2026-04-11 05:46:58.236247 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.236258 | orchestrator | 2026-04-11 05:46:58.236269 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 05:46:58.236279 | orchestrator | Saturday 11 April 2026 05:46:17 +0000 (0:00:01.102) 0:36:13.393 ******** 2026-04-11 05:46:58.236313 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.236324 | orchestrator | 2026-04-11 05:46:58.236337 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-11 05:46:58.236350 | orchestrator | Saturday 11 April 2026 05:46:18 +0000 (0:00:01.504) 0:36:14.897 ******** 2026-04-11 05:46:58.236363 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.236375 | orchestrator | 2026-04-11 05:46:58.236387 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 05:46:58.236400 | orchestrator | Saturday 11 April 2026 
05:46:20 +0000 (0:00:01.486) 0:36:16.384 ******** 2026-04-11 05:46:58.236412 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.236424 | orchestrator | 2026-04-11 05:46:58.236436 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 05:46:58.236448 | orchestrator | Saturday 11 April 2026 05:46:21 +0000 (0:00:01.695) 0:36:18.080 ******** 2026-04-11 05:46:58.236462 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.236475 | orchestrator | 2026-04-11 05:46:58.236487 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 05:46:58.236499 | orchestrator | Saturday 11 April 2026 05:46:22 +0000 (0:00:01.122) 0:36:19.202 ******** 2026-04-11 05:46:58.236511 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.236524 | orchestrator | 2026-04-11 05:46:58.236536 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 05:46:58.236549 | orchestrator | Saturday 11 April 2026 05:46:24 +0000 (0:00:01.252) 0:36:20.455 ******** 2026-04-11 05:46:58.236561 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.236574 | orchestrator | 2026-04-11 05:46:58.236586 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 05:46:58.236612 | orchestrator | Saturday 11 April 2026 05:46:25 +0000 (0:00:01.127) 0:36:21.583 ******** 2026-04-11 05:46:58.236623 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.236634 | orchestrator | 2026-04-11 05:46:58.236645 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 05:46:58.236656 | orchestrator | Saturday 11 April 2026 05:46:26 +0000 (0:00:01.531) 0:36:23.114 ******** 2026-04-11 05:46:58.236667 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.236677 | orchestrator | 2026-04-11 05:46:58.236688 | orchestrator | TASK [ceph-handler : 
Include check_socket_non_container.yml] ******************* 2026-04-11 05:46:58.236708 | orchestrator | Saturday 11 April 2026 05:46:28 +0000 (0:00:01.611) 0:36:24.726 ******** 2026-04-11 05:46:58.236719 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.236730 | orchestrator | 2026-04-11 05:46:58.236740 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 05:46:58.236751 | orchestrator | Saturday 11 April 2026 05:46:29 +0000 (0:00:01.182) 0:36:25.909 ******** 2026-04-11 05:46:58.236762 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.236772 | orchestrator | 2026-04-11 05:46:58.236783 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 05:46:58.236793 | orchestrator | Saturday 11 April 2026 05:46:30 +0000 (0:00:01.139) 0:36:27.048 ******** 2026-04-11 05:46:58.236804 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.236815 | orchestrator | 2026-04-11 05:46:58.236826 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 05:46:58.236836 | orchestrator | Saturday 11 April 2026 05:46:31 +0000 (0:00:01.145) 0:36:28.194 ******** 2026-04-11 05:46:58.236847 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.236858 | orchestrator | 2026-04-11 05:46:58.236868 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 05:46:58.236879 | orchestrator | Saturday 11 April 2026 05:46:33 +0000 (0:00:01.145) 0:36:29.339 ******** 2026-04-11 05:46:58.236890 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.236901 | orchestrator | 2026-04-11 05:46:58.236928 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 05:46:58.236940 | orchestrator | Saturday 11 April 2026 05:46:34 +0000 (0:00:01.146) 0:36:30.485 ******** 2026-04-11 05:46:58.236951 | orchestrator | skipping: 
[testbed-node-3] 2026-04-11 05:46:58.236962 | orchestrator | 2026-04-11 05:46:58.236973 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 05:46:58.236984 | orchestrator | Saturday 11 April 2026 05:46:35 +0000 (0:00:01.142) 0:36:31.628 ******** 2026-04-11 05:46:58.236994 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237005 | orchestrator | 2026-04-11 05:46:58.237016 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 05:46:58.237027 | orchestrator | Saturday 11 April 2026 05:46:36 +0000 (0:00:01.133) 0:36:32.762 ******** 2026-04-11 05:46:58.237037 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237048 | orchestrator | 2026-04-11 05:46:58.237059 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 05:46:58.237070 | orchestrator | Saturday 11 April 2026 05:46:37 +0000 (0:00:01.127) 0:36:33.889 ******** 2026-04-11 05:46:58.237080 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.237091 | orchestrator | 2026-04-11 05:46:58.237102 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 05:46:58.237113 | orchestrator | Saturday 11 April 2026 05:46:38 +0000 (0:00:01.149) 0:36:35.039 ******** 2026-04-11 05:46:58.237123 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.237134 | orchestrator | 2026-04-11 05:46:58.237145 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-11 05:46:58.237156 | orchestrator | Saturday 11 April 2026 05:46:39 +0000 (0:00:01.138) 0:36:36.178 ******** 2026-04-11 05:46:58.237166 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237177 | orchestrator | 2026-04-11 05:46:58.237188 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-11 05:46:58.237199 | 
orchestrator | Saturday 11 April 2026 05:46:41 +0000 (0:00:01.232) 0:36:37.410 ******** 2026-04-11 05:46:58.237209 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237220 | orchestrator | 2026-04-11 05:46:58.237230 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-11 05:46:58.237241 | orchestrator | Saturday 11 April 2026 05:46:42 +0000 (0:00:01.124) 0:36:38.535 ******** 2026-04-11 05:46:58.237252 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237263 | orchestrator | 2026-04-11 05:46:58.237273 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-11 05:46:58.237324 | orchestrator | Saturday 11 April 2026 05:46:43 +0000 (0:00:01.156) 0:36:39.691 ******** 2026-04-11 05:46:58.237336 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237347 | orchestrator | 2026-04-11 05:46:58.237358 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-11 05:46:58.237369 | orchestrator | Saturday 11 April 2026 05:46:44 +0000 (0:00:01.175) 0:36:40.866 ******** 2026-04-11 05:46:58.237379 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237390 | orchestrator | 2026-04-11 05:46:58.237401 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-11 05:46:58.237412 | orchestrator | Saturday 11 April 2026 05:46:45 +0000 (0:00:01.116) 0:36:41.983 ******** 2026-04-11 05:46:58.237423 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237433 | orchestrator | 2026-04-11 05:46:58.237444 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-11 05:46:58.237455 | orchestrator | Saturday 11 April 2026 05:46:47 +0000 (0:00:01.257) 0:36:43.241 ******** 2026-04-11 05:46:58.237466 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237490 | orchestrator | 2026-04-11 
05:46:58.237501 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-11 05:46:58.237512 | orchestrator | Saturday 11 April 2026 05:46:48 +0000 (0:00:01.181) 0:36:44.423 ******** 2026-04-11 05:46:58.237523 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237534 | orchestrator | 2026-04-11 05:46:58.237544 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-11 05:46:58.237555 | orchestrator | Saturday 11 April 2026 05:46:49 +0000 (0:00:01.134) 0:36:45.557 ******** 2026-04-11 05:46:58.237566 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237577 | orchestrator | 2026-04-11 05:46:58.237593 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-11 05:46:58.237605 | orchestrator | Saturday 11 April 2026 05:46:50 +0000 (0:00:01.174) 0:36:46.732 ******** 2026-04-11 05:46:58.237615 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237626 | orchestrator | 2026-04-11 05:46:58.237637 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-11 05:46:58.237648 | orchestrator | Saturday 11 April 2026 05:46:51 +0000 (0:00:01.095) 0:36:47.828 ******** 2026-04-11 05:46:58.237658 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237669 | orchestrator | 2026-04-11 05:46:58.237680 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-11 05:46:58.237691 | orchestrator | Saturday 11 April 2026 05:46:52 +0000 (0:00:01.140) 0:36:48.968 ******** 2026-04-11 05:46:58.237701 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:46:58.237712 | orchestrator | 2026-04-11 05:46:58.237723 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-11 05:46:58.237734 | orchestrator | Saturday 11 April 2026 05:46:53 +0000 
(0:00:01.115) 0:36:50.084 ******** 2026-04-11 05:46:58.237744 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.237755 | orchestrator | 2026-04-11 05:46:58.237766 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-11 05:46:58.237777 | orchestrator | Saturday 11 April 2026 05:46:55 +0000 (0:00:01.869) 0:36:51.953 ******** 2026-04-11 05:46:58.237787 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:46:58.237798 | orchestrator | 2026-04-11 05:46:58.237809 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-11 05:46:58.237820 | orchestrator | Saturday 11 April 2026 05:46:57 +0000 (0:00:02.241) 0:36:54.194 ******** 2026-04-11 05:46:58.237830 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-04-11 05:46:58.237841 | orchestrator | 2026-04-11 05:46:58.237858 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-11 05:47:45.475083 | orchestrator | Saturday 11 April 2026 05:46:59 +0000 (0:00:01.129) 0:36:55.324 ******** 2026-04-11 05:47:45.475201 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.475242 | orchestrator | 2026-04-11 05:47:45.475256 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-11 05:47:45.475334 | orchestrator | Saturday 11 April 2026 05:47:00 +0000 (0:00:01.166) 0:36:56.491 ******** 2026-04-11 05:47:45.475348 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.475359 | orchestrator | 2026-04-11 05:47:45.475370 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-11 05:47:45.475381 | orchestrator | Saturday 11 April 2026 05:47:01 +0000 (0:00:01.130) 0:36:57.621 ******** 2026-04-11 05:47:45.475391 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 
05:47:45.475402 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 05:47:45.475414 | orchestrator | 2026-04-11 05:47:45.475425 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-11 05:47:45.475435 | orchestrator | Saturday 11 April 2026 05:47:03 +0000 (0:00:01.805) 0:36:59.426 ******** 2026-04-11 05:47:45.475446 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:47:45.475457 | orchestrator | 2026-04-11 05:47:45.475469 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-11 05:47:45.475480 | orchestrator | Saturday 11 April 2026 05:47:04 +0000 (0:00:01.469) 0:37:00.896 ******** 2026-04-11 05:47:45.475491 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.475502 | orchestrator | 2026-04-11 05:47:45.475512 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-11 05:47:45.475523 | orchestrator | Saturday 11 April 2026 05:47:05 +0000 (0:00:01.148) 0:37:02.044 ******** 2026-04-11 05:47:45.475534 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.475544 | orchestrator | 2026-04-11 05:47:45.475555 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-11 05:47:45.475565 | orchestrator | Saturday 11 April 2026 05:47:06 +0000 (0:00:01.145) 0:37:03.190 ******** 2026-04-11 05:47:45.475576 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.475586 | orchestrator | 2026-04-11 05:47:45.475597 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-11 05:47:45.475609 | orchestrator | Saturday 11 April 2026 05:47:08 +0000 (0:00:01.158) 0:37:04.349 ******** 2026-04-11 05:47:45.475621 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-04-11 05:47:45.475635 | orchestrator | 
2026-04-11 05:47:45.475646 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-11 05:47:45.475659 | orchestrator | Saturday 11 April 2026 05:47:09 +0000 (0:00:01.117) 0:37:05.466 ******** 2026-04-11 05:47:45.475671 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:47:45.475684 | orchestrator | 2026-04-11 05:47:45.475697 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-11 05:47:45.475709 | orchestrator | Saturday 11 April 2026 05:47:11 +0000 (0:00:01.747) 0:37:07.214 ******** 2026-04-11 05:47:45.475721 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 05:47:45.475734 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 05:47:45.475746 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 05:47:45.475758 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.475771 | orchestrator | 2026-04-11 05:47:45.475783 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-11 05:47:45.475795 | orchestrator | Saturday 11 April 2026 05:47:12 +0000 (0:00:01.196) 0:37:08.410 ******** 2026-04-11 05:47:45.475808 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.475820 | orchestrator | 2026-04-11 05:47:45.475833 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-11 05:47:45.475845 | orchestrator | Saturday 11 April 2026 05:47:13 +0000 (0:00:01.146) 0:37:09.556 ******** 2026-04-11 05:47:45.475857 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.475870 | orchestrator | 2026-04-11 05:47:45.475906 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-11 05:47:45.475920 | orchestrator | Saturday 11 April 2026 05:47:14 +0000 
(0:00:01.216) 0:37:10.773 ******** 2026-04-11 05:47:45.475932 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.475944 | orchestrator | 2026-04-11 05:47:45.475956 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-11 05:47:45.475969 | orchestrator | Saturday 11 April 2026 05:47:15 +0000 (0:00:01.132) 0:37:11.906 ******** 2026-04-11 05:47:45.475982 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.475994 | orchestrator | 2026-04-11 05:47:45.476004 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-11 05:47:45.476015 | orchestrator | Saturday 11 April 2026 05:47:16 +0000 (0:00:01.132) 0:37:13.038 ******** 2026-04-11 05:47:45.476025 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.476036 | orchestrator | 2026-04-11 05:47:45.476046 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-11 05:47:45.476057 | orchestrator | Saturday 11 April 2026 05:47:17 +0000 (0:00:01.138) 0:37:14.176 ******** 2026-04-11 05:47:45.476067 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:47:45.476078 | orchestrator | 2026-04-11 05:47:45.476088 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-11 05:47:45.476099 | orchestrator | Saturday 11 April 2026 05:47:20 +0000 (0:00:02.428) 0:37:16.605 ******** 2026-04-11 05:47:45.476110 | orchestrator | ok: [testbed-node-3] 2026-04-11 05:47:45.476120 | orchestrator | 2026-04-11 05:47:45.476131 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-11 05:47:45.476141 | orchestrator | Saturday 11 April 2026 05:47:21 +0000 (0:00:01.181) 0:37:17.786 ******** 2026-04-11 05:47:45.476213 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-04-11 05:47:45.476254 | orchestrator | 2026-04-11 
05:47:45.476367 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-11 05:47:45.476384 | orchestrator | Saturday 11 April 2026 05:47:22 +0000 (0:00:01.152) 0:37:18.939 ******** 2026-04-11 05:47:45.476395 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.476405 | orchestrator | 2026-04-11 05:47:45.476416 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-11 05:47:45.476427 | orchestrator | Saturday 11 April 2026 05:47:23 +0000 (0:00:01.264) 0:37:20.203 ******** 2026-04-11 05:47:45.476438 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.476448 | orchestrator | 2026-04-11 05:47:45.476459 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-11 05:47:45.476470 | orchestrator | Saturday 11 April 2026 05:47:25 +0000 (0:00:01.117) 0:37:21.321 ******** 2026-04-11 05:47:45.476480 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.476491 | orchestrator | 2026-04-11 05:47:45.476502 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-11 05:47:45.476512 | orchestrator | Saturday 11 April 2026 05:47:26 +0000 (0:00:01.214) 0:37:22.536 ******** 2026-04-11 05:47:45.476523 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.476534 | orchestrator | 2026-04-11 05:47:45.476544 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-11 05:47:45.476555 | orchestrator | Saturday 11 April 2026 05:47:27 +0000 (0:00:01.256) 0:37:23.793 ******** 2026-04-11 05:47:45.476566 | orchestrator | skipping: [testbed-node-3] 2026-04-11 05:47:45.476634 | orchestrator | 2026-04-11 05:47:45.476674 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-11 05:47:45.476685 | orchestrator | Saturday 11 April 2026 05:47:28 +0000 (0:00:01.158) 
0:37:24.952 ********
2026-04-11 05:47:45.476696 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:47:45.476706 | orchestrator |
2026-04-11 05:47:45.476717 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-11 05:47:45.476728 | orchestrator | Saturday 11 April 2026 05:47:29 +0000 (0:00:01.210) 0:37:26.163 ********
2026-04-11 05:47:45.476739 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:47:45.476759 | orchestrator |
2026-04-11 05:47:45.476770 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-11 05:47:45.476781 | orchestrator | Saturday 11 April 2026 05:47:31 +0000 (0:00:01.192) 0:37:27.355 ********
2026-04-11 05:47:45.476791 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:47:45.476802 | orchestrator |
2026-04-11 05:47:45.476813 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-11 05:47:45.476823 | orchestrator | Saturday 11 April 2026 05:47:32 +0000 (0:00:01.157) 0:37:28.513 ********
2026-04-11 05:47:45.476834 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:47:45.476845 | orchestrator |
2026-04-11 05:47:45.476855 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-11 05:47:45.476866 | orchestrator | Saturday 11 April 2026 05:47:33 +0000 (0:00:01.147) 0:37:29.660 ********
2026-04-11 05:47:45.476876 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-04-11 05:47:45.476887 | orchestrator |
2026-04-11 05:47:45.476898 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-11 05:47:45.476909 | orchestrator | Saturday 11 April 2026 05:47:34 +0000 (0:00:01.134) 0:37:30.795 ********
2026-04-11 05:47:45.476919 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-04-11 05:47:45.476930 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-11 05:47:45.476941 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-11 05:47:45.476952 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-11 05:47:45.476963 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-11 05:47:45.476973 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-11 05:47:45.476984 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-11 05:47:45.476994 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-11 05:47:45.477005 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-11 05:47:45.477016 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-11 05:47:45.477033 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-11 05:47:45.477045 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-11 05:47:45.477055 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-11 05:47:45.477066 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-11 05:47:45.477076 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-04-11 05:47:45.477087 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-04-11 05:47:45.477098 | orchestrator |
2026-04-11 05:47:45.477109 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-11 05:47:45.477119 | orchestrator | Saturday 11 April 2026 05:47:41 +0000 (0:00:06.512) 0:37:37.307 ********
2026-04-11 05:47:45.477130 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-04-11 05:47:45.477140 | orchestrator |
2026-04-11 05:47:45.477151 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-11 05:47:45.477162 | orchestrator | Saturday 11 April 2026 05:47:42 +0000 (0:00:01.484) 0:37:38.791 ********
2026-04-11 05:47:45.477172 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-11 05:47:45.477184 | orchestrator |
2026-04-11 05:47:45.477195 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-11 05:47:45.477205 | orchestrator | Saturday 11 April 2026 05:47:44 +0000 (0:00:01.574) 0:37:40.366 ********
2026-04-11 05:47:45.477216 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-11 05:47:45.477227 | orchestrator |
2026-04-11 05:47:45.477245 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-11 05:48:36.342412 | orchestrator | Saturday 11 April 2026 05:47:46 +0000 (0:00:02.064) 0:37:42.431 ********
2026-04-11 05:48:36.342506 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342516 | orchestrator |
2026-04-11 05:48:36.342524 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-11 05:48:36.342530 | orchestrator | Saturday 11 April 2026 05:47:47 +0000 (0:00:01.132) 0:37:43.563 ********
2026-04-11 05:48:36.342537 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342543 | orchestrator |
2026-04-11 05:48:36.342549 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-11 05:48:36.342555 | orchestrator | Saturday 11 April 2026 05:47:48 +0000 (0:00:01.147) 0:37:44.711 ********
2026-04-11 05:48:36.342561 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342567 | orchestrator |
2026-04-11 05:48:36.342573 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-11 05:48:36.342579 | orchestrator | Saturday 11 April 2026 05:47:49 +0000 (0:00:01.104) 0:37:45.815 ********
2026-04-11 05:48:36.342584 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342590 | orchestrator |
2026-04-11 05:48:36.342596 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-11 05:48:36.342602 | orchestrator | Saturday 11 April 2026 05:47:50 +0000 (0:00:01.193) 0:37:47.009 ********
2026-04-11 05:48:36.342607 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342613 | orchestrator |
2026-04-11 05:48:36.342619 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-11 05:48:36.342626 | orchestrator | Saturday 11 April 2026 05:47:51 +0000 (0:00:01.135) 0:37:48.144 ********
2026-04-11 05:48:36.342632 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342638 | orchestrator |
2026-04-11 05:48:36.342644 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-11 05:48:36.342650 | orchestrator | Saturday 11 April 2026 05:47:53 +0000 (0:00:01.183) 0:37:49.328 ********
2026-04-11 05:48:36.342655 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342661 | orchestrator |
2026-04-11 05:48:36.342667 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-11 05:48:36.342673 | orchestrator | Saturday 11 April 2026 05:47:54 +0000 (0:00:01.134) 0:37:50.463 ********
2026-04-11 05:48:36.342679 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342684 | orchestrator |
2026-04-11 05:48:36.342690 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-11 05:48:36.342696 | orchestrator | Saturday 11 April 2026 05:47:55 +0000 (0:00:01.195) 0:37:51.658 ********
2026-04-11 05:48:36.342702 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342708 | orchestrator |
2026-04-11 05:48:36.342714 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-11 05:48:36.342719 | orchestrator | Saturday 11 April 2026 05:47:56 +0000 (0:00:01.136) 0:37:52.794 ********
2026-04-11 05:48:36.342726 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342731 | orchestrator |
2026-04-11 05:48:36.342737 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-11 05:48:36.342743 | orchestrator | Saturday 11 April 2026 05:47:57 +0000 (0:00:01.146) 0:37:53.941 ********
2026-04-11 05:48:36.342749 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:48:36.342756 | orchestrator |
2026-04-11 05:48:36.342761 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-11 05:48:36.342767 | orchestrator | Saturday 11 April 2026 05:47:59 +0000 (0:00:01.271) 0:37:55.213 ********
2026-04-11 05:48:36.342773 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-11 05:48:36.342779 | orchestrator |
2026-04-11 05:48:36.342785 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-11 05:48:36.342791 | orchestrator | Saturday 11 April 2026 05:48:03 +0000 (0:00:04.426) 0:37:59.639 ********
2026-04-11 05:48:36.342796 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-11 05:48:36.342821 | orchestrator |
2026-04-11 05:48:36.342838 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-11 05:48:36.342844 | orchestrator | Saturday 11 April 2026 05:48:04 +0000 (0:00:01.185) 0:38:00.825 ********
2026-04-11 05:48:36.342851 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-11 05:48:36.342860 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-11 05:48:36.342867 | orchestrator |
2026-04-11 05:48:36.342873 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-11 05:48:36.342879 | orchestrator | Saturday 11 April 2026 05:48:12 +0000 (0:00:08.318) 0:38:09.143 ********
2026-04-11 05:48:36.342884 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342890 | orchestrator |
2026-04-11 05:48:36.342896 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-11 05:48:36.342902 | orchestrator | Saturday 11 April 2026 05:48:14 +0000 (0:00:01.131) 0:38:10.275 ********
2026-04-11 05:48:36.342908 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342914 | orchestrator |
2026-04-11 05:48:36.342931 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 05:48:36.342937 | orchestrator | Saturday 11 April 2026 05:48:15 +0000 (0:00:01.166) 0:38:11.441 ********
2026-04-11 05:48:36.342943 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342949 | orchestrator |
2026-04-11 05:48:36.342956 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 05:48:36.342962 | orchestrator | Saturday 11 April 2026 05:48:16 +0000 (0:00:01.194) 0:38:12.636 ********
2026-04-11 05:48:36.342969 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.342975 | orchestrator |
2026-04-11 05:48:36.342981 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 05:48:36.342988 | orchestrator | Saturday 11 April 2026 05:48:17 +0000 (0:00:01.138) 0:38:13.774 ********
2026-04-11 05:48:36.342995 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.343002 | orchestrator |
2026-04-11 05:48:36.343009 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 05:48:36.343016 | orchestrator | Saturday 11 April 2026 05:48:18 +0000 (0:00:01.164) 0:38:14.939 ********
2026-04-11 05:48:36.343022 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:48:36.343029 | orchestrator |
2026-04-11 05:48:36.343035 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 05:48:36.343042 | orchestrator | Saturday 11 April 2026 05:48:19 +0000 (0:00:01.249) 0:38:16.188 ********
2026-04-11 05:48:36.343049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 05:48:36.343055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 05:48:36.343062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 05:48:36.343069 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.343076 | orchestrator |
2026-04-11 05:48:36.343082 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 05:48:36.343089 | orchestrator | Saturday 11 April 2026 05:48:21 +0000 (0:00:01.425) 0:38:17.614 ********
2026-04-11 05:48:36.343096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 05:48:36.343103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 05:48:36.343114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 05:48:36.343121 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.343127 | orchestrator |
2026-04-11 05:48:36.343134 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 05:48:36.343140 | orchestrator | Saturday 11 April 2026 05:48:22 +0000 (0:00:01.427) 0:38:19.042 ********
2026-04-11 05:48:36.343147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-11 05:48:36.343154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-11 05:48:36.343161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-11 05:48:36.343167 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.343173 | orchestrator |
2026-04-11 05:48:36.343180 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 05:48:36.343187 | orchestrator | Saturday 11 April 2026 05:48:24 +0000 (0:00:01.438) 0:38:20.480 ********
2026-04-11 05:48:36.343193 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:48:36.343200 | orchestrator |
2026-04-11 05:48:36.343206 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 05:48:36.343213 | orchestrator | Saturday 11 April 2026 05:48:25 +0000 (0:00:01.135) 0:38:21.616 ********
2026-04-11 05:48:36.343219 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-11 05:48:36.343226 | orchestrator |
2026-04-11 05:48:36.343233 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-11 05:48:36.343239 | orchestrator | Saturday 11 April 2026 05:48:26 +0000 (0:00:01.358) 0:38:22.975 ********
2026-04-11 05:48:36.343297 | orchestrator | changed: [testbed-node-3]
2026-04-11 05:48:36.343307 | orchestrator |
2026-04-11 05:48:36.343314 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-11 05:48:36.343321 | orchestrator | Saturday 11 April 2026 05:48:29 +0000 (0:00:02.303) 0:38:25.279 ********
2026-04-11 05:48:36.343328 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:48:36.343334 | orchestrator |
2026-04-11 05:48:36.343343 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-11 05:48:36.343349 | orchestrator | Saturday 11 April 2026 05:48:30 +0000 (0:00:01.211) 0:38:26.490 ********
2026-04-11 05:48:36.343355 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:48:36.343362 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:48:36.343367 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:48:36.343373 | orchestrator |
2026-04-11 05:48:36.343379 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-11 05:48:36.343385 | orchestrator | Saturday 11 April 2026 05:48:32 +0000 (0:00:01.726) 0:38:28.217 ********
2026-04-11 05:48:36.343390 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3
2026-04-11 05:48:36.343396 | orchestrator |
2026-04-11 05:48:36.343402 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-11 05:48:36.343408 | orchestrator | Saturday 11 April 2026 05:48:33 +0000 (0:00:01.468) 0:38:29.686 ********
2026-04-11 05:48:36.343413 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.343419 | orchestrator |
2026-04-11 05:48:36.343425 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-11 05:48:36.343431 | orchestrator | Saturday 11 April 2026 05:48:34 +0000 (0:00:01.158) 0:38:30.845 ********
2026-04-11 05:48:36.343436 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:48:36.343442 | orchestrator |
2026-04-11 05:48:36.343448 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-11 05:48:36.343453 | orchestrator | Saturday 11 April 2026 05:48:35 +0000 (0:00:01.201) 0:38:32.046 ********
2026-04-11 05:48:36.343459 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:48:36.343465 | orchestrator |
2026-04-11 05:48:36.343475 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-11 05:49:46.193505 | orchestrator | Saturday 11 April 2026 05:48:37 +0000 (0:00:01.483) 0:38:33.530 ********
2026-04-11 05:49:46.193625 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:49:46.193641 | orchestrator |
2026-04-11 05:49:46.193654 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-11 05:49:46.193666 | orchestrator | Saturday 11 April 2026 05:48:38 +0000 (0:00:01.230) 0:38:34.761 ********
2026-04-11 05:49:46.193677 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-11 05:49:46.193689 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-11 05:49:46.193701 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-11 05:49:46.193711 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-11 05:49:46.193722 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-11 05:49:46.193733 | orchestrator |
2026-04-11 05:49:46.193743 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-11 05:49:46.193754 | orchestrator | Saturday 11 April 2026 05:48:41 +0000 (0:00:02.947) 0:38:37.709 ********
2026-04-11 05:49:46.193765 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.193777 | orchestrator |
2026-04-11 05:49:46.193788 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-11 05:49:46.193798 | orchestrator | Saturday 11 April 2026 05:48:42 +0000 (0:00:01.125) 0:38:38.835 ********
2026-04-11 05:49:46.193809 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3
2026-04-11 05:49:46.193820 | orchestrator |
2026-04-11 05:49:46.193830 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-11 05:49:46.193841 | orchestrator | Saturday 11 April 2026 05:48:44 +0000 (0:00:01.655) 0:38:40.490 ********
2026-04-11 05:49:46.193852 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-11 05:49:46.193862 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-04-11 05:49:46.193873 | orchestrator |
2026-04-11 05:49:46.193884 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-11 05:49:46.193894 | orchestrator | Saturday 11 April 2026 05:48:46 +0000 (0:00:01.849) 0:38:42.339 ********
2026-04-11 05:49:46.193905 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 05:49:46.193916 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-11 05:49:46.193927 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-11 05:49:46.193937 | orchestrator |
2026-04-11 05:49:46.193948 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-11 05:49:46.193958 | orchestrator | Saturday 11 April 2026 05:48:49 +0000 (0:00:03.106) 0:38:45.446 ********
2026-04-11 05:49:46.193969 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-04-11 05:49:46.193981 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-11 05:49:46.193992 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:49:46.194002 | orchestrator |
2026-04-11 05:49:46.194013 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-11 05:49:46.194098 | orchestrator | Saturday 11 April 2026 05:48:51 +0000 (0:00:01.931) 0:38:47.378 ********
2026-04-11 05:49:46.194111 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.194124 | orchestrator |
2026-04-11 05:49:46.194136 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-11 05:49:46.194149 | orchestrator | Saturday 11 April 2026 05:48:52 +0000 (0:00:01.301) 0:38:48.680 ********
2026-04-11 05:49:46.194161 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.194173 | orchestrator |
2026-04-11 05:49:46.194185 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-11 05:49:46.194197 | orchestrator | Saturday 11 April 2026 05:48:53 +0000 (0:00:01.144) 0:38:49.825 ********
2026-04-11 05:49:46.194210 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.194272 | orchestrator |
2026-04-11 05:49:46.194301 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-11 05:49:46.194314 | orchestrator | Saturday 11 April 2026 05:48:54 +0000 (0:00:01.130) 0:38:50.955 ********
2026-04-11 05:49:46.194326 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3
2026-04-11 05:49:46.194338 | orchestrator |
2026-04-11 05:49:46.194351 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-11 05:49:46.194363 | orchestrator | Saturday 11 April 2026 05:48:56 +0000 (0:00:01.517) 0:38:52.473 ********
2026-04-11 05:49:46.194376 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:49:46.194388 | orchestrator |
2026-04-11 05:49:46.194399 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-11 05:49:46.194410 | orchestrator | Saturday 11 April 2026 05:48:57 +0000 (0:00:01.456) 0:38:53.929 ********
2026-04-11 05:49:46.194420 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:49:46.194431 | orchestrator |
2026-04-11 05:49:46.194442 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-11 05:49:46.194452 | orchestrator | Saturday 11 April 2026 05:49:01 +0000 (0:00:03.484) 0:38:57.414 ********
2026-04-11 05:49:46.194463 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3
2026-04-11 05:49:46.194473 | orchestrator |
2026-04-11 05:49:46.194484 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-11 05:49:46.194495 | orchestrator | Saturday 11 April 2026 05:49:02 +0000 (0:00:01.463) 0:38:58.878 ********
2026-04-11 05:49:46.194506 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:49:46.194517 | orchestrator |
2026-04-11 05:49:46.194527 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-11 05:49:46.194538 | orchestrator | Saturday 11 April 2026 05:49:04 +0000 (0:00:01.967) 0:39:00.845 ********
2026-04-11 05:49:46.194549 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:49:46.194559 | orchestrator |
2026-04-11 05:49:46.194570 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-11 05:49:46.194599 | orchestrator | Saturday 11 April 2026 05:49:06 +0000 (0:00:01.963) 0:39:02.809 ********
2026-04-11 05:49:46.194616 | orchestrator | ok: [testbed-node-3]
2026-04-11 05:49:46.194634 | orchestrator |
2026-04-11 05:49:46.194652 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-11 05:49:46.194668 | orchestrator | Saturday 11 April 2026 05:49:08 +0000 (0:00:02.280) 0:39:05.090 ********
2026-04-11 05:49:46.194686 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.194704 | orchestrator |
2026-04-11 05:49:46.194722 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-11 05:49:46.194738 | orchestrator | Saturday 11 April 2026 05:49:10 +0000 (0:00:01.150) 0:39:06.241 ********
2026-04-11 05:49:46.194755 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.194774 | orchestrator |
2026-04-11 05:49:46.194791 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-11 05:49:46.194808 | orchestrator | Saturday 11 April 2026 05:49:11 +0000 (0:00:01.142) 0:39:07.384 ********
2026-04-11 05:49:46.194826 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-11 05:49:46.194843 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-04-11 05:49:46.194862 | orchestrator |
2026-04-11 05:49:46.194881 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-11 05:49:46.194900 | orchestrator | Saturday 11 April 2026 05:49:12 +0000 (0:00:01.783) 0:39:09.167 ********
2026-04-11 05:49:46.194919 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-11 05:49:46.194933 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-04-11 05:49:46.194944 | orchestrator |
2026-04-11 05:49:46.194954 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-11 05:49:46.194965 | orchestrator | Saturday 11 April 2026 05:49:15 +0000 (0:00:02.941) 0:39:12.109 ********
2026-04-11 05:49:46.194976 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-11 05:49:46.194987 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-04-11 05:49:46.195009 | orchestrator |
2026-04-11 05:49:46.195020 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-11 05:49:46.195031 | orchestrator | Saturday 11 April 2026 05:49:20 +0000 (0:00:04.549) 0:39:16.659 ********
2026-04-11 05:49:46.195042 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.195052 | orchestrator |
2026-04-11 05:49:46.195063 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-11 05:49:46.195074 | orchestrator | Saturday 11 April 2026 05:49:21 +0000 (0:00:01.233) 0:39:17.893 ********
2026-04-11 05:49:46.195084 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.195095 | orchestrator |
2026-04-11 05:49:46.195106 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-11 05:49:46.195116 | orchestrator | Saturday 11 April 2026 05:49:22 +0000 (0:00:01.243) 0:39:19.137 ********
2026-04-11 05:49:46.195127 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.195137 | orchestrator |
2026-04-11 05:49:46.195153 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-04-11 05:49:46.195171 | orchestrator | Saturday 11 April 2026 05:49:24 +0000 (0:00:01.231) 0:39:20.369 ********
2026-04-11 05:49:46.195189 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.195208 | orchestrator |
2026-04-11 05:49:46.195226 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-04-11 05:49:46.195268 | orchestrator | Saturday 11 April 2026 05:49:25 +0000 (0:00:01.212) 0:39:21.581 ********
2026-04-11 05:49:46.195287 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.195305 | orchestrator |
2026-04-11 05:49:46.195319 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-04-11 05:49:46.195330 | orchestrator | Saturday 11 April 2026 05:49:26 +0000 (0:00:01.149) 0:39:22.731 ********
2026-04-11 05:49:46.195341 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-04-11 05:49:46.195352 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left).
2026-04-11 05:49:46.195371 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left).
2026-04-11 05:49:46.195382 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (597 retries left).
2026-04-11 05:49:46.195392 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-11 05:49:46.195403 | orchestrator |
2026-04-11 05:49:46.195414 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-11 05:49:46.195425 | orchestrator | Saturday 11 April 2026 05:49:40 +0000 (0:00:13.982) 0:39:36.713 ********
2026-04-11 05:49:46.195435 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.195446 | orchestrator |
2026-04-11 05:49:46.195457 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-11 05:49:46.195468 | orchestrator | Saturday 11 April 2026 05:49:41 +0000 (0:00:01.118) 0:39:37.831 ********
2026-04-11 05:49:46.195478 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.195489 | orchestrator |
2026-04-11 05:49:46.195500 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-11 05:49:46.195510 | orchestrator | Saturday 11 April 2026 05:49:42 +0000 (0:00:01.111) 0:39:38.943 ********
2026-04-11 05:49:46.195521 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.195532 | orchestrator |
2026-04-11 05:49:46.195542 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-11 05:49:46.195553 | orchestrator | Saturday 11 April 2026 05:49:43 +0000 (0:00:01.173) 0:39:40.117 ********
2026-04-11 05:49:46.195563 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.195574 | orchestrator |
2026-04-11 05:49:46.195585 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-11 05:49:46.195595 | orchestrator | Saturday 11 April 2026 05:49:45 +0000 (0:00:01.157) 0:39:41.275 ********
2026-04-11 05:49:46.195606 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:49:46.195625 | orchestrator |
2026-04-11 05:49:46.195636 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-11 05:49:46.195658 | orchestrator | Saturday 11 April 2026 05:49:46 +0000 (0:00:01.121) 0:39:42.397 ********
2026-04-11 05:50:11.161911 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:50:11.162085 | orchestrator |
2026-04-11 05:50:11.162115 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-11 05:50:11.162134 | orchestrator | Saturday 11 April 2026 05:49:47 +0000 (0:00:01.131) 0:39:43.528 ********
2026-04-11 05:50:11.162152 | orchestrator | skipping: [testbed-node-3]
2026-04-11 05:50:11.162170 | orchestrator |
2026-04-11 05:50:11.162187 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-04-11 05:50:11.162201 | orchestrator |
2026-04-11 05:50:11.162211 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-11 05:50:11.162221 | orchestrator | Saturday 11 April 2026 05:49:48 +0000 (0:00:01.117) 0:39:44.646 ********
2026-04-11 05:50:11.162278 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-04-11 05:50:11.162292 | orchestrator |
2026-04-11 05:50:11.162301 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-11 05:50:11.162311 | orchestrator | Saturday 11 April 2026 05:49:49 +0000 (0:00:01.123) 0:39:45.769 ********
2026-04-11 05:50:11.162321 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:50:11.162331 | orchestrator |
2026-04-11 05:50:11.162341 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-11 05:50:11.162350 | orchestrator | Saturday 11 April 2026 05:49:51 +0000 (0:00:01.520) 0:39:47.290 ********
2026-04-11 05:50:11.162360 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:50:11.162370 | orchestrator |
2026-04-11 05:50:11.162379 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-11 05:50:11.162389 | orchestrator | Saturday 11 April 2026 05:49:52 +0000 (0:00:01.182) 0:39:48.472 ********
2026-04-11 05:50:11.162399 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:50:11.162408 | orchestrator |
2026-04-11 05:50:11.162418 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-11 05:50:11.162427 | orchestrator | Saturday 11 April 2026 05:49:53 +0000 (0:00:01.432) 0:39:49.905 ********
2026-04-11 05:50:11.162437 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:50:11.162447 | orchestrator |
2026-04-11 05:50:11.162456 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-11 05:50:11.162468 | orchestrator | Saturday 11 April 2026 05:49:54 +0000 (0:00:01.185) 0:39:51.091 ********
2026-04-11 05:50:11.162479 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:50:11.162490 | orchestrator |
2026-04-11 05:50:11.162501 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-11 05:50:11.162512 | orchestrator | Saturday 11 April 2026 05:49:56 +0000 (0:00:01.181) 0:39:52.272 ********
2026-04-11 05:50:11.162523 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:50:11.162534 | orchestrator |
2026-04-11 05:50:11.162546 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-11 05:50:11.162558 | orchestrator | Saturday 11 April 2026 05:49:57 +0000 (0:00:01.172) 0:39:53.445 ********
2026-04-11 05:50:11.162569 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:50:11.162580 | orchestrator |
2026-04-11 05:50:11.162591 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-11 05:50:11.162603 | orchestrator | Saturday 11 April 2026 05:49:58 +0000 (0:00:01.172) 0:39:54.617 ********
2026-04-11 05:50:11.162614 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:50:11.162625 | orchestrator |
2026-04-11 05:50:11.162636 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-11 05:50:11.162647 | orchestrator | Saturday 11 April 2026 05:49:59 +0000 (0:00:01.185) 0:39:55.803 ********
2026-04-11 05:50:11.162658 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:50:11.162669 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:50:11.162701 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:50:11.162712 | orchestrator |
2026-04-11 05:50:11.162723 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-11 05:50:11.162734 | orchestrator | Saturday 11 April 2026 05:50:01 +0000 (0:00:02.115) 0:39:57.918 ********
2026-04-11 05:50:11.162757 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:50:11.162768 | orchestrator |
2026-04-11 05:50:11.162780 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-11 05:50:11.162791 | orchestrator | Saturday 11 April 2026 05:50:02 +0000 (0:00:01.273) 0:39:59.192 ********
2026-04-11 05:50:11.162802 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:50:11.162813 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:50:11.162824 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:50:11.162834 | orchestrator |
2026-04-11 05:50:11.162843 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-11 05:50:11.162856 | orchestrator | Saturday 11 April 2026 05:50:06 +0000 (0:00:03.212) 0:40:02.405 ********
2026-04-11 05:50:11.162873 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-11 05:50:11.162890 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-11 05:50:11.162907 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-11 05:50:11.162925 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:50:11.162944 | orchestrator |
2026-04-11 05:50:11.162961 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-11 05:50:11.162973 | orchestrator | Saturday 11 April 2026 05:50:07 +0000 (0:00:01.753) 0:40:04.158 ********
2026-04-11 05:50:11.162984 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 05:50:11.163012 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 05:50:11.163023 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 05:50:11.163032 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:50:11.163042 | orchestrator |
2026-04-11 05:50:11.163052 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-11 05:50:11.163061 | orchestrator | Saturday 11 April 2026 05:50:09 +0000 (0:00:01.897) 0:40:06.056 ********
2026-04-11 05:50:11.163073 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:50:11.163085 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:50:11.163095 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:50:11.163113 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:50:11.163123 | orchestrator |
2026-04-11 05:50:11.163133 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-11 05:50:11.163142 | orchestrator | Saturday 11 April 2026 05:50:11 +0000 (0:00:01.192) 0:40:07.249 ********
2026-04-11 05:50:11.163162 | orchestrator |
ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 05:50:03.503684', 'end': '2026-04-11 05:50:03.553267', 'delta': '0:00:00.049583', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-11 05:50:11.163183 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '26fb3b048944', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 05:50:04.389851', 'end': '2026-04-11 05:50:04.438998', 'delta': '0:00:00.049147', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26fb3b048944'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-11 05:50:11.163211 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '5c0324173fbf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 05:50:04.949628', 'end': '2026-04-11 05:50:05.005672', 'delta': '0:00:00.056044', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5c0324173fbf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-11 05:50:29.790388 | orchestrator | 2026-04-11 05:50:29.790511 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-11 05:50:29.790531 | orchestrator | Saturday 11 April 2026 05:50:12 +0000 (0:00:01.194) 0:40:08.443 ******** 2026-04-11 05:50:29.790544 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:50:29.790558 | orchestrator | 2026-04-11 05:50:29.790570 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-11 05:50:29.790581 | orchestrator | Saturday 11 April 2026 05:50:13 +0000 (0:00:01.244) 0:40:09.687 ******** 2026-04-11 05:50:29.790594 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:50:29.790608 | orchestrator | 2026-04-11 05:50:29.790621 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-11 05:50:29.790634 | orchestrator | Saturday 11 April 2026 05:50:14 +0000 (0:00:01.265) 0:40:10.953 ******** 2026-04-11 05:50:29.790646 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:50:29.790659 | orchestrator | 2026-04-11 05:50:29.790671 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-11 05:50:29.790684 | orchestrator | Saturday 11 April 2026 05:50:15 +0000 (0:00:01.156) 0:40:12.110 ******** 2026-04-11 05:50:29.790721 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:50:29.790736 | orchestrator | 2026-04-11 05:50:29.790748 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 05:50:29.790761 | orchestrator | 
Saturday 11 April 2026 05:50:17 +0000 (0:00:01.991) 0:40:14.101 ******** 2026-04-11 05:50:29.790773 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:50:29.790786 | orchestrator | 2026-04-11 05:50:29.790798 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-11 05:50:29.790810 | orchestrator | Saturday 11 April 2026 05:50:19 +0000 (0:00:01.134) 0:40:15.236 ******** 2026-04-11 05:50:29.790822 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:50:29.790834 | orchestrator | 2026-04-11 05:50:29.790846 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-11 05:50:29.790858 | orchestrator | Saturday 11 April 2026 05:50:20 +0000 (0:00:01.097) 0:40:16.334 ******** 2026-04-11 05:50:29.790871 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:50:29.790885 | orchestrator | 2026-04-11 05:50:29.790898 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 05:50:29.790911 | orchestrator | Saturday 11 April 2026 05:50:21 +0000 (0:00:01.267) 0:40:17.601 ******** 2026-04-11 05:50:29.790924 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:50:29.790938 | orchestrator | 2026-04-11 05:50:29.790950 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-11 05:50:29.790963 | orchestrator | Saturday 11 April 2026 05:50:22 +0000 (0:00:01.126) 0:40:18.728 ******** 2026-04-11 05:50:29.790976 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:50:29.790989 | orchestrator | 2026-04-11 05:50:29.791002 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-11 05:50:29.791015 | orchestrator | Saturday 11 April 2026 05:50:23 +0000 (0:00:01.158) 0:40:19.887 ******** 2026-04-11 05:50:29.791028 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:50:29.791041 | orchestrator | 2026-04-11 05:50:29.791054 | 
orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-11 05:50:29.791087 | orchestrator | Saturday 11 April 2026 05:50:24 +0000 (0:00:01.180) 0:40:21.068 ******** 2026-04-11 05:50:29.791109 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:50:29.791123 | orchestrator | 2026-04-11 05:50:29.791134 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-11 05:50:29.791147 | orchestrator | Saturday 11 April 2026 05:50:26 +0000 (0:00:01.176) 0:40:22.244 ******** 2026-04-11 05:50:29.791159 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:50:29.791172 | orchestrator | 2026-04-11 05:50:29.791183 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-11 05:50:29.791211 | orchestrator | Saturday 11 April 2026 05:50:27 +0000 (0:00:01.187) 0:40:23.432 ******** 2026-04-11 05:50:29.791224 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:50:29.791255 | orchestrator | 2026-04-11 05:50:29.791267 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-11 05:50:29.791279 | orchestrator | Saturday 11 April 2026 05:50:28 +0000 (0:00:01.211) 0:40:24.644 ******** 2026-04-11 05:50:29.791290 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:50:29.791301 | orchestrator | 2026-04-11 05:50:29.791313 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-11 05:50:29.791325 | orchestrator | Saturday 11 April 2026 05:50:29 +0000 (0:00:01.178) 0:40:25.822 ******** 2026-04-11 05:50:29.791341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:50:29.791377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2', 'dm-uuid-LVM-1JO1XI6e6VuGVeVzDykcfKbBtikjhudLLEUIdm7ttGNsolk0UkQjcUO4narXEX2E'], 'uuids': ['9d724d10-77ae-4967-ad2d-00bd58cf4b58'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E']}})  2026-04-11 05:50:29.791407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac', 'scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7ad0a670', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-11 05:50:29.791422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gs6fgb-1Wcf-xL0p-5nrc-t0Sp-iDOp-vEqK0z', 'scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb', 'scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855']}})  2026-04-11 05:50:29.791436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:50:29.791449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:50:29.791468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-33-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 05:50:29.791482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:50:29.791503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh', 'dm-uuid-CRYPT-LUKS2-f995fcc5d8e74f9b8df633437ec8101a-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 05:50:29.791525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:50:31.242793 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855', 'dm-uuid-LVM-K7WW8kSs32CapDCsexGLtC6qsV1U5049IOnZa3AHrzxg1HkvDRqme1iBPNHDbFWh'], 'uuids': ['f995fcc5-d8e7-4f9b-8df6-33437ec8101a'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh']}})  2026-04-11 05:50:31.242911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MaeyQs-lCkd-15by-ONeM-2vsv-Cp22-T0mgnh', 'scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f', 'scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2']}})  2026-04-11 05:50:31.242929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:50:31.242963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '122e9594', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 05:50:31.243015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:50:31.243027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:50:31.243037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E', 'dm-uuid-CRYPT-LUKS2-9d724d1077ae4967ad2d00bd58cf4b58-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 05:50:31.243048 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:50:31.243058 | orchestrator | 2026-04-11 05:50:31.243068 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 05:50:31.243078 | orchestrator | Saturday 11 April 2026 05:50:31 +0000 (0:00:01.490) 0:40:27.312 ******** 2026-04-11 05:50:31.243088 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:50:31.243126 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2', 'dm-uuid-LVM-1JO1XI6e6VuGVeVzDykcfKbBtikjhudLLEUIdm7ttGNsolk0UkQjcUO4narXEX2E'], 'uuids': ['9d724d10-77ae-4967-ad2d-00bd58cf4b58'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E']}}, 'ansible_loop_var': 'item'})  2026-04-11 05:50:31.243174 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac', 'scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7ad0a670', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:50:31.243194 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gs6fgb-1Wcf-xL0p-5nrc-t0Sp-iDOp-vEqK0z', 'scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb', 'scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855']}}, 'ansible_loop_var': 'item'})  2026-04-11 05:50:31.365749 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:50:31.365851 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:50:31.365884 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:50:31.365920 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:50:31.365933 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh', 'dm-uuid-CRYPT-LUKS2-f995fcc5d8e74f9b8df633437ec8101a-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:50:31.365946 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:50:31.365979 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855', 'dm-uuid-LVM-K7WW8kSs32CapDCsexGLtC6qsV1U5049IOnZa3AHrzxg1HkvDRqme1iBPNHDbFWh'], 'uuids': ['f995fcc5-d8e7-4f9b-8df6-33437ec8101a'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh']}}, 'ansible_loop_var': 'item'})
2026-04-11 05:50:31.365999 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MaeyQs-lCkd-15by-ONeM-2vsv-Cp22-T0mgnh', 'scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f', 'scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2']}}, 'ansible_loop_var': 'item'})
2026-04-11 05:50:31.366084 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:50:31.366112 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '122e9594', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:51:00.427267 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:51:00.427449 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:51:00.427492 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E', 'dm-uuid-CRYPT-LUKS2-9d724d1077ae4967ad2d00bd58cf4b58-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 05:51:00.427508 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:00.427522 | orchestrator |
2026-04-11 05:51:00.427534 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-11 05:51:00.427547 | orchestrator | Saturday 11 April 2026 05:50:32 +0000 (0:00:01.498) 0:40:28.811 ********
2026-04-11 05:51:00.427558 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:00.427570 | orchestrator |
2026-04-11 05:51:00.427582 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-11 05:51:00.427592 | orchestrator | Saturday 11 April 2026 05:50:34 +0000 (0:00:01.499) 0:40:30.310 ********
2026-04-11 05:51:00.427603 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:00.427614 | orchestrator |
2026-04-11 05:51:00.427624 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-11 05:51:00.427635 | orchestrator | Saturday 11 April 2026 05:50:35 +0000 (0:00:01.140) 0:40:31.451 ********
2026-04-11 05:51:00.427646 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:00.427657 | orchestrator |
2026-04-11 05:51:00.427667 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-11 05:51:00.427678 | orchestrator | Saturday 11 April 2026 05:50:36 +0000 (0:00:01.525) 0:40:32.976 ********
2026-04-11 05:51:00.427689 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:00.427700 | orchestrator |
2026-04-11 05:51:00.427710 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-11 05:51:00.427723 | orchestrator | Saturday 11 April 2026 05:50:37 +0000 (0:00:01.131) 0:40:34.108 ********
2026-04-11 05:51:00.427736 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:00.427749 | orchestrator |
2026-04-11 05:51:00.427762 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-11 05:51:00.427775 | orchestrator | Saturday 11 April 2026 05:50:39 +0000 (0:00:01.245) 0:40:35.354 ********
2026-04-11 05:51:00.427787 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:00.427800 | orchestrator |
2026-04-11 05:51:00.427812 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-11 05:51:00.427825 | orchestrator | Saturday 11 April 2026 05:50:40 +0000 (0:00:01.205) 0:40:36.560 ********
2026-04-11 05:51:00.427838 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-11 05:51:00.427851 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-11 05:51:00.427864 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-11 05:51:00.427877 | orchestrator |
2026-04-11 05:51:00.427889 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-11 05:51:00.427902 | orchestrator | Saturday 11 April 2026 05:50:42 +0000 (0:00:02.087) 0:40:38.647 ********
2026-04-11 05:51:00.427916 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-11 05:51:00.427929 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-11 05:51:00.427949 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-11 05:51:00.427963 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:00.427976 | orchestrator |
2026-04-11 05:51:00.427989 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-11 05:51:00.428002 | orchestrator | Saturday 11 April 2026 05:50:43 +0000 (0:00:01.192) 0:40:39.840 ********
2026-04-11 05:51:00.428033 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-04-11 05:51:00.428047 |
orchestrator |
2026-04-11 05:51:00.428062 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 05:51:00.428077 | orchestrator | Saturday 11 April 2026 05:50:44 +0000 (0:00:01.348) 0:40:41.188 ********
2026-04-11 05:51:00.428088 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:00.428098 | orchestrator |
2026-04-11 05:51:00.428109 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 05:51:00.428120 | orchestrator | Saturday 11 April 2026 05:50:46 +0000 (0:00:01.209) 0:40:42.398 ********
2026-04-11 05:51:00.428130 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:00.428141 | orchestrator |
2026-04-11 05:51:00.428152 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 05:51:00.428163 | orchestrator | Saturday 11 April 2026 05:50:47 +0000 (0:00:01.155) 0:40:43.553 ********
2026-04-11 05:51:00.428173 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:00.428184 | orchestrator |
2026-04-11 05:51:00.428195 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 05:51:00.428211 | orchestrator | Saturday 11 April 2026 05:50:48 +0000 (0:00:01.203) 0:40:44.757 ********
2026-04-11 05:51:00.428222 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:00.428233 | orchestrator |
2026-04-11 05:51:00.428287 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 05:51:00.428299 | orchestrator | Saturday 11 April 2026 05:50:49 +0000 (0:00:01.257) 0:40:46.014 ********
2026-04-11 05:51:00.428320 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-11 05:51:00.428332 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-11 05:51:00.428343 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-11 05:51:00.428353 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:00.428364 | orchestrator |
2026-04-11 05:51:00.428375 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 05:51:00.428385 | orchestrator | Saturday 11 April 2026 05:50:51 +0000 (0:00:01.455) 0:40:47.470 ********
2026-04-11 05:51:00.428396 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-11 05:51:00.428408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-11 05:51:00.428418 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-11 05:51:00.428429 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:00.428440 | orchestrator |
2026-04-11 05:51:00.428450 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 05:51:00.428461 | orchestrator | Saturday 11 April 2026 05:50:52 +0000 (0:00:01.397) 0:40:48.867 ********
2026-04-11 05:51:00.428472 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-11 05:51:00.428483 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-11 05:51:00.428493 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-11 05:51:00.428504 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:00.428514 | orchestrator |
2026-04-11 05:51:00.428525 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 05:51:00.428536 | orchestrator | Saturday 11 April 2026 05:50:54 +0000 (0:00:01.360) 0:40:50.227 ********
2026-04-11 05:51:00.428547 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:00.428557 | orchestrator |
2026-04-11 05:51:00.428568 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 05:51:00.428587 | orchestrator | Saturday 11 April 2026 05:50:55 +0000 (0:00:01.148) 0:40:51.376 ********
2026-04-11 05:51:00.428598 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-11 05:51:00.428609 | orchestrator |
2026-04-11 05:51:00.428620 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-11 05:51:00.428630 | orchestrator | Saturday 11 April 2026 05:50:56 +0000 (0:00:01.344) 0:40:52.721 ********
2026-04-11 05:51:00.428641 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:51:00.428651 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:51:00.428662 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:51:00.428673 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-11 05:51:00.428683 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-04-11 05:51:00.428694 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-11 05:51:00.428705 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-11 05:51:00.428715 | orchestrator |
2026-04-11 05:51:00.428726 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-11 05:51:00.428737 | orchestrator | Saturday 11 April 2026 05:50:58 +0000 (0:00:02.166) 0:40:54.887 ********
2026-04-11 05:51:00.428747 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:51:00.428758 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:51:00.428768 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:51:00.428779 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-11 05:51:00.428790 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-04-11 05:51:00.428800 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-11 05:51:00.428811 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-11 05:51:00.428822 | orchestrator |
2026-04-11 05:51:00.428839 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-04-11 05:51:43.729524 | orchestrator | Saturday 11 April 2026 05:51:00 +0000 (0:00:02.301) 0:40:57.189 ********
2026-04-11 05:51:43.729646 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.729662 | orchestrator |
2026-04-11 05:51:43.729675 | orchestrator | TASK [Set num_osds] ************************************************************
2026-04-11 05:51:43.729687 | orchestrator | Saturday 11 April 2026 05:51:02 +0000 (0:00:01.162) 0:40:58.352 ********
2026-04-11 05:51:43.729698 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.729709 | orchestrator |
2026-04-11 05:51:43.729719 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-04-11 05:51:43.729730 | orchestrator | Saturday 11 April 2026 05:51:02 +0000 (0:00:00.794) 0:40:59.147 ********
2026-04-11 05:51:43.729741 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.729752 | orchestrator |
2026-04-11 05:51:43.729763 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-04-11 05:51:43.729774 | orchestrator | Saturday 11 April 2026 05:51:03 +0000 (0:00:00.871) 0:41:00.019 ********
2026-04-11 05:51:43.729786 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-04-11 05:51:43.729798 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-04-11 05:51:43.729809 | orchestrator |
2026-04-11 05:51:43.729835 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-11 05:51:43.729846 | orchestrator | Saturday 11 April 2026 05:51:07 +0000 (0:00:03.814) 0:41:03.833 ********
2026-04-11 05:51:43.729857 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-04-11 05:51:43.729869 | orchestrator |
2026-04-11 05:51:43.729880 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-11 05:51:43.729914 | orchestrator | Saturday 11 April 2026 05:51:08 +0000 (0:00:01.123) 0:41:04.957 ********
2026-04-11 05:51:43.729925 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-04-11 05:51:43.729936 | orchestrator |
2026-04-11 05:51:43.729955 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-11 05:51:43.729975 | orchestrator | Saturday 11 April 2026 05:51:09 +0000 (0:00:01.099) 0:41:06.057 ********
2026-04-11 05:51:43.729994 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.730013 | orchestrator |
2026-04-11 05:51:43.730095 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-11 05:51:43.730108 | orchestrator | Saturday 11 April 2026 05:51:11 +0000 (0:00:01.201) 0:41:07.259 ********
2026-04-11 05:51:43.730120 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.730132 | orchestrator |
2026-04-11 05:51:43.730145 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-11 05:51:43.730157 | orchestrator | Saturday 11 April 2026 05:51:12 +0000 (0:00:01.516) 0:41:08.775 ********
2026-04-11 05:51:43.730169 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.730182 | orchestrator |
2026-04-11 05:51:43.730194 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-11 05:51:43.730207 | orchestrator | Saturday 11 April 2026 05:51:14 +0000 (0:00:01.620) 0:41:10.396 ********
2026-04-11 05:51:43.730219 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.730231 | orchestrator |
2026-04-11 05:51:43.730243 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-11 05:51:43.730280 | orchestrator | Saturday 11 April 2026 05:51:15 +0000 (0:00:01.597) 0:41:11.994 ********
2026-04-11 05:51:43.730291 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.730302 | orchestrator |
2026-04-11 05:51:43.730313 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-11 05:51:43.730323 | orchestrator | Saturday 11 April 2026 05:51:16 +0000 (0:00:01.155) 0:41:13.149 ********
2026-04-11 05:51:43.730334 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.730345 | orchestrator |
2026-04-11 05:51:43.730356 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-11 05:51:43.730366 | orchestrator | Saturday 11 April 2026 05:51:18 +0000 (0:00:01.170) 0:41:14.320 ********
2026-04-11 05:51:43.730377 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.730387 | orchestrator |
2026-04-11 05:51:43.730398 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-11 05:51:43.730409 | orchestrator | Saturday 11 April 2026 05:51:19 +0000 (0:00:01.147) 0:41:15.468 ********
2026-04-11 05:51:43.730419 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.730430 | orchestrator |
2026-04-11 05:51:43.730440 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-11 05:51:43.730451 | orchestrator | Saturday 11 April 2026 05:51:20 +0000 (0:00:01.564) 0:41:17.032 ********
2026-04-11 05:51:43.730461 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.730472 | orchestrator |
2026-04-11 05:51:43.730482 | orchestrator |
TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-11 05:51:43.730493 | orchestrator | Saturday 11 April 2026 05:51:22 +0000 (0:00:01.541) 0:41:18.574 ********
2026-04-11 05:51:43.730504 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.730514 | orchestrator |
2026-04-11 05:51:43.730525 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 05:51:43.730535 | orchestrator | Saturday 11 April 2026 05:51:23 +0000 (0:00:00.768) 0:41:19.342 ********
2026-04-11 05:51:43.730546 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.730556 | orchestrator |
2026-04-11 05:51:43.730567 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 05:51:43.730578 | orchestrator | Saturday 11 April 2026 05:51:23 +0000 (0:00:00.768) 0:41:20.111 ********
2026-04-11 05:51:43.730588 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.730609 | orchestrator |
2026-04-11 05:51:43.730620 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 05:51:43.730630 | orchestrator | Saturday 11 April 2026 05:51:24 +0000 (0:00:00.807) 0:41:20.919 ********
2026-04-11 05:51:43.730641 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.730651 | orchestrator |
2026-04-11 05:51:43.730662 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 05:51:43.730672 | orchestrator | Saturday 11 April 2026 05:51:25 +0000 (0:00:00.825) 0:41:21.744 ********
2026-04-11 05:51:43.730683 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.730694 | orchestrator |
2026-04-11 05:51:43.730723 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 05:51:43.730734 | orchestrator | Saturday 11 April 2026 05:51:26 +0000 (0:00:00.799) 0:41:22.544 ********
2026-04-11 05:51:43.730745 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.730756 | orchestrator |
2026-04-11 05:51:43.730767 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 05:51:43.730777 | orchestrator | Saturday 11 April 2026 05:51:27 +0000 (0:00:00.762) 0:41:23.307 ********
2026-04-11 05:51:43.730788 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.730798 | orchestrator |
2026-04-11 05:51:43.730809 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 05:51:43.730820 | orchestrator | Saturday 11 April 2026 05:51:27 +0000 (0:00:00.756) 0:41:24.063 ********
2026-04-11 05:51:43.730830 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.730841 | orchestrator |
2026-04-11 05:51:43.730852 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 05:51:43.730862 | orchestrator | Saturday 11 April 2026 05:51:28 +0000 (0:00:00.837) 0:41:24.901 ********
2026-04-11 05:51:43.730873 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.730884 | orchestrator |
2026-04-11 05:51:43.730901 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 05:51:43.730912 | orchestrator | Saturday 11 April 2026 05:51:29 +0000 (0:00:00.795) 0:41:25.697 ********
2026-04-11 05:51:43.730923 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.730933 | orchestrator |
2026-04-11 05:51:43.730944 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-11 05:51:43.730955 | orchestrator | Saturday 11 April 2026 05:51:30 +0000 (0:00:00.945) 0:41:26.642 ********
2026-04-11 05:51:43.730966 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.730976 | orchestrator |
2026-04-11 05:51:43.730987 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-11 05:51:43.730997 | orchestrator | Saturday 11 April 2026 05:51:31 +0000 (0:00:00.792) 0:41:27.435 ********
2026-04-11 05:51:43.731008 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.731019 | orchestrator |
2026-04-11 05:51:43.731029 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-11 05:51:43.731040 | orchestrator | Saturday 11 April 2026 05:51:32 +0000 (0:00:00.784) 0:41:28.219 ********
2026-04-11 05:51:43.731050 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.731061 | orchestrator |
2026-04-11 05:51:43.731072 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-11 05:51:43.731082 | orchestrator | Saturday 11 April 2026 05:51:32 +0000 (0:00:00.853) 0:41:29.073 ********
2026-04-11 05:51:43.731093 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.731103 | orchestrator |
2026-04-11 05:51:43.731114 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-11 05:51:43.731124 | orchestrator | Saturday 11 April 2026 05:51:33 +0000 (0:00:00.809) 0:41:29.883 ********
2026-04-11 05:51:43.731135 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.731146 | orchestrator |
2026-04-11 05:51:43.731156 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-11 05:51:43.731167 | orchestrator | Saturday 11 April 2026 05:51:34 +0000 (0:00:00.761) 0:41:30.644 ********
2026-04-11 05:51:43.731178 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.731195 | orchestrator |
2026-04-11 05:51:43.731206 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-11 05:51:43.731216 | orchestrator | Saturday 11 April 2026 05:51:35 +0000 (0:00:00.814) 0:41:31.459 ********
2026-04-11 05:51:43.731227 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.731238 | orchestrator |
2026-04-11 05:51:43.731248 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-11 05:51:43.731277 | orchestrator | Saturday 11 April 2026 05:51:36 +0000 (0:00:00.783) 0:41:32.243 ********
2026-04-11 05:51:43.731288 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.731298 | orchestrator |
2026-04-11 05:51:43.731309 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-11 05:51:43.731320 | orchestrator | Saturday 11 April 2026 05:51:36 +0000 (0:00:00.768) 0:41:33.011 ********
2026-04-11 05:51:43.731330 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.731341 | orchestrator |
2026-04-11 05:51:43.731352 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-11 05:51:43.731362 | orchestrator | Saturday 11 April 2026 05:51:37 +0000 (0:00:00.767) 0:41:33.779 ********
2026-04-11 05:51:43.731373 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.731384 | orchestrator |
2026-04-11 05:51:43.731395 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-11 05:51:43.731405 | orchestrator | Saturday 11 April 2026 05:51:38 +0000 (0:00:00.832) 0:41:34.611 ********
2026-04-11 05:51:43.731416 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.731427 | orchestrator |
2026-04-11 05:51:43.731437 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-11 05:51:43.731448 | orchestrator | Saturday 11 April 2026 05:51:39 +0000 (0:00:00.809) 0:41:35.421 ********
2026-04-11 05:51:43.731459 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:51:43.731469 | orchestrator |
2026-04-11 05:51:43.731480 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-11 05:51:43.731491 | orchestrator | Saturday 11
April 2026 05:51:40 +0000 (0:00:00.876) 0:41:36.297 ********
2026-04-11 05:51:43.731502 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.731512 | orchestrator |
2026-04-11 05:51:43.731523 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-11 05:51:43.731534 | orchestrator | Saturday 11 April 2026 05:51:41 +0000 (0:00:01.552) 0:41:37.850 ********
2026-04-11 05:51:43.731544 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:51:43.731555 | orchestrator |
2026-04-11 05:51:43.731566 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-11 05:51:43.731576 | orchestrator | Saturday 11 April 2026 05:51:43 +0000 (0:00:01.854) 0:41:39.704 ********
2026-04-11 05:51:43.731587 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-04-11 05:51:43.731598 | orchestrator |
2026-04-11 05:51:43.731615 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-11 05:52:28.024995 | orchestrator | Saturday 11 April 2026 05:51:44 +0000 (0:00:01.125) 0:41:40.830 ********
2026-04-11 05:52:28.025105 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:52:28.025120 | orchestrator |
2026-04-11 05:52:28.025132 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-11 05:52:28.025143 | orchestrator | Saturday 11 April 2026 05:51:45 +0000 (0:00:01.123) 0:41:41.954 ********
2026-04-11 05:52:28.025153 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:52:28.025163 | orchestrator |
2026-04-11 05:52:28.025173 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-11 05:52:28.025183 | orchestrator | Saturday 11 April 2026 05:51:46 +0000 (0:00:01.133) 0:41:43.087 ********
2026-04-11 05:52:28.025193 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-11 05:52:28.025203 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-11 05:52:28.025214 | orchestrator |
2026-04-11 05:52:28.025223 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-11 05:52:28.025369 | orchestrator | Saturday 11 April 2026 05:51:48 +0000 (0:00:01.819) 0:41:44.906 ********
2026-04-11 05:52:28.025385 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:52:28.025396 | orchestrator |
2026-04-11 05:52:28.025406 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-11 05:52:28.025415 | orchestrator | Saturday 11 April 2026 05:51:50 +0000 (0:00:01.485) 0:41:46.392 ********
2026-04-11 05:52:28.025425 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:52:28.025435 | orchestrator |
2026-04-11 05:52:28.025445 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-11 05:52:28.025454 | orchestrator | Saturday 11 April 2026 05:51:51 +0000 (0:00:01.139) 0:41:47.532 ********
2026-04-11 05:52:28.025464 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:52:28.025473 | orchestrator |
2026-04-11 05:52:28.025483 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-11 05:52:28.025493 | orchestrator | Saturday 11 April 2026 05:51:52 +0000 (0:00:00.764) 0:41:48.297 ********
2026-04-11 05:52:28.025502 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:52:28.025512 | orchestrator |
2026-04-11 05:52:28.025521 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-11 05:52:28.025531 | orchestrator | Saturday 11 April 2026 05:51:52 +0000 (0:00:00.768) 0:41:49.066 ********
2026-04-11 05:52:28.025543 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-04-11 05:52:28.025555 | orchestrator |
2026-04-11 05:52:28.025566 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-11 05:52:28.025577 | orchestrator | Saturday 11 April 2026 05:51:54 +0000 (0:00:01.255) 0:41:50.322 ********
2026-04-11 05:52:28.025588 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:52:28.025599 | orchestrator |
2026-04-11 05:52:28.025611 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-11 05:52:28.025622 | orchestrator | Saturday 11 April 2026 05:51:55 +0000 (0:00:01.737) 0:41:52.060 ********
2026-04-11 05:52:28.025634 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-11 05:52:28.025645 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-11 05:52:28.025656 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-11 05:52:28.025667 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:52:28.025679 | orchestrator |
2026-04-11 05:52:28.025690 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-11 05:52:28.025701 | orchestrator | Saturday 11 April 2026 05:51:56 +0000 (0:00:01.138) 0:41:53.198 ********
2026-04-11 05:52:28.025712 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:52:28.025723 | orchestrator |
2026-04-11 05:52:28.025735 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-11 05:52:28.025747 | orchestrator | Saturday 11 April 2026 05:51:58 +0000 (0:00:01.102) 0:41:54.301 ********
2026-04-11 05:52:28.025758 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:52:28.025768 | orchestrator |
2026-04-11 05:52:28.025780 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-11 05:52:28.025791 | orchestrator | Saturday 11 April 2026 05:51:59 +0000 (0:00:01.173) 0:41:55.474 ********
2026-04-11 05:52:28.025802 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:52:28.025814 | orchestrator |
2026-04-11 05:52:28.025825 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-11 05:52:28.025836 | orchestrator | Saturday 11 April 2026 05:52:00 +0000 (0:00:01.160) 0:41:56.637 ********
2026-04-11 05:52:28.025847 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:52:28.025858 | orchestrator |
2026-04-11 05:52:28.025869 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-11 05:52:28.025881 | orchestrator | Saturday 11 April 2026 05:52:01 +0000 (0:00:01.148) 0:41:57.785 ********
2026-04-11 05:52:28.025893 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:52:28.025913 | orchestrator |
2026-04-11 05:52:28.025923 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-11 05:52:28.025933 | orchestrator | Saturday 11 April 2026 05:52:02 +0000 (0:00:00.812) 0:41:58.598 ********
2026-04-11 05:52:28.025942 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:52:28.025952 | orchestrator |
2026-04-11 05:52:28.025961 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-11 05:52:28.025971 | orchestrator | Saturday 11 April 2026 05:52:04 +0000 (0:00:02.137) 0:42:00.736 ********
2026-04-11 05:52:28.025980 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:52:28.025990 | orchestrator |
2026-04-11 05:52:28.025999 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-11 05:52:28.026009 | orchestrator | Saturday 11 April 2026 05:52:05 +0000 (0:00:00.814) 0:42:01.551 ********
2026-04-11 05:52:28.026076 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-04-11 05:52:28.026087 | orchestrator | 2026-04-11 05:52:28.026113 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-11 05:52:28.026124 | orchestrator | Saturday 11 April 2026 05:52:06 +0000 (0:00:01.157) 0:42:02.709 ******** 2026-04-11 05:52:28.026134 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:52:28.026143 | orchestrator | 2026-04-11 05:52:28.026153 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-11 05:52:28.026163 | orchestrator | Saturday 11 April 2026 05:52:07 +0000 (0:00:01.178) 0:42:03.887 ******** 2026-04-11 05:52:28.026172 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:52:28.026182 | orchestrator | 2026-04-11 05:52:28.026191 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-11 05:52:28.026201 | orchestrator | Saturday 11 April 2026 05:52:08 +0000 (0:00:01.244) 0:42:05.132 ******** 2026-04-11 05:52:28.026211 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:52:28.026220 | orchestrator | 2026-04-11 05:52:28.026230 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-11 05:52:28.026239 | orchestrator | Saturday 11 April 2026 05:52:10 +0000 (0:00:01.215) 0:42:06.347 ******** 2026-04-11 05:52:28.026249 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:52:28.026259 | orchestrator | 2026-04-11 05:52:28.026297 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-11 05:52:28.026307 | orchestrator | Saturday 11 April 2026 05:52:11 +0000 (0:00:01.129) 0:42:07.477 ******** 2026-04-11 05:52:28.026317 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:52:28.026327 | orchestrator | 2026-04-11 05:52:28.026336 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-11 05:52:28.026346 | orchestrator | 
Saturday 11 April 2026 05:52:12 +0000 (0:00:01.194) 0:42:08.671 ******** 2026-04-11 05:52:28.026356 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:52:28.026365 | orchestrator | 2026-04-11 05:52:28.026375 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-11 05:52:28.026384 | orchestrator | Saturday 11 April 2026 05:52:13 +0000 (0:00:01.169) 0:42:09.841 ******** 2026-04-11 05:52:28.026394 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:52:28.026403 | orchestrator | 2026-04-11 05:52:28.026413 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-11 05:52:28.026422 | orchestrator | Saturday 11 April 2026 05:52:14 +0000 (0:00:01.163) 0:42:11.005 ******** 2026-04-11 05:52:28.026432 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:52:28.026441 | orchestrator | 2026-04-11 05:52:28.026451 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-11 05:52:28.026461 | orchestrator | Saturday 11 April 2026 05:52:15 +0000 (0:00:01.164) 0:42:12.170 ******** 2026-04-11 05:52:28.026470 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:52:28.026480 | orchestrator | 2026-04-11 05:52:28.026490 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-11 05:52:28.026499 | orchestrator | Saturday 11 April 2026 05:52:16 +0000 (0:00:00.800) 0:42:12.970 ******** 2026-04-11 05:52:28.026517 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-04-11 05:52:28.026540 | orchestrator | 2026-04-11 05:52:28.026550 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-11 05:52:28.026560 | orchestrator | Saturday 11 April 2026 05:52:17 +0000 (0:00:01.123) 0:42:14.094 ******** 2026-04-11 05:52:28.026580 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-04-11 05:52:28.026590 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-11 05:52:28.026600 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-11 05:52:28.026609 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-11 05:52:28.026619 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-11 05:52:28.026628 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-11 05:52:28.026638 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-11 05:52:28.026647 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-11 05:52:28.026657 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 05:52:28.026667 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 05:52:28.026676 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 05:52:28.026686 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 05:52:28.026695 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 05:52:28.026705 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 05:52:28.026715 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-04-11 05:52:28.026724 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-04-11 05:52:28.026734 | orchestrator | 2026-04-11 05:52:28.026743 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-11 05:52:28.026753 | orchestrator | Saturday 11 April 2026 05:52:24 +0000 (0:00:06.374) 0:42:20.469 ******** 2026-04-11 05:52:28.026763 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-04-11 05:52:28.026772 | orchestrator | 2026-04-11 05:52:28.026782 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-04-11 05:52:28.026791 | orchestrator | Saturday 11 April 2026 05:52:25 +0000 (0:00:01.250) 0:42:21.719 ******** 2026-04-11 05:52:28.026801 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 05:52:28.026812 | orchestrator | 2026-04-11 05:52:28.026822 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-11 05:52:28.026831 | orchestrator | Saturday 11 April 2026 05:52:27 +0000 (0:00:01.527) 0:42:23.247 ******** 2026-04-11 05:52:28.026841 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 05:52:28.026850 | orchestrator | 2026-04-11 05:52:28.026866 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-11 05:53:08.267896 | orchestrator | Saturday 11 April 2026 05:52:28 +0000 (0:00:01.668) 0:42:24.915 ******** 2026-04-11 05:53:08.268016 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268033 | orchestrator | 2026-04-11 05:53:08.268047 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-11 05:53:08.268058 | orchestrator | Saturday 11 April 2026 05:52:29 +0000 (0:00:00.786) 0:42:25.702 ******** 2026-04-11 05:53:08.268070 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268081 | orchestrator | 2026-04-11 05:53:08.268092 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-11 05:53:08.268103 | orchestrator | Saturday 11 April 2026 05:52:30 +0000 (0:00:00.781) 0:42:26.483 ******** 2026-04-11 05:53:08.268114 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268126 | orchestrator | 2026-04-11 05:53:08.268159 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-04-11 05:53:08.268171 | orchestrator | Saturday 11 April 2026 05:52:31 +0000 (0:00:00.789) 0:42:27.273 ******** 2026-04-11 05:53:08.268182 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268193 | orchestrator | 2026-04-11 05:53:08.268218 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-11 05:53:08.268229 | orchestrator | Saturday 11 April 2026 05:52:31 +0000 (0:00:00.778) 0:42:28.051 ******** 2026-04-11 05:53:08.268240 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268251 | orchestrator | 2026-04-11 05:53:08.268331 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-11 05:53:08.268347 | orchestrator | Saturday 11 April 2026 05:52:32 +0000 (0:00:00.832) 0:42:28.884 ******** 2026-04-11 05:53:08.268358 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268369 | orchestrator | 2026-04-11 05:53:08.268380 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-11 05:53:08.268391 | orchestrator | Saturday 11 April 2026 05:52:33 +0000 (0:00:00.776) 0:42:29.660 ******** 2026-04-11 05:53:08.268402 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268413 | orchestrator | 2026-04-11 05:53:08.268426 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-11 05:53:08.268439 | orchestrator | Saturday 11 April 2026 05:52:34 +0000 (0:00:00.850) 0:42:30.511 ******** 2026-04-11 05:53:08.268452 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268464 | orchestrator | 2026-04-11 05:53:08.268476 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-11 05:53:08.268489 | orchestrator | Saturday 11 
April 2026 05:52:35 +0000 (0:00:00.820) 0:42:31.332 ******** 2026-04-11 05:53:08.268502 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268515 | orchestrator | 2026-04-11 05:53:08.268527 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-11 05:53:08.268540 | orchestrator | Saturday 11 April 2026 05:52:35 +0000 (0:00:00.795) 0:42:32.128 ******** 2026-04-11 05:53:08.268553 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268565 | orchestrator | 2026-04-11 05:53:08.268578 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-11 05:53:08.268590 | orchestrator | Saturday 11 April 2026 05:52:36 +0000 (0:00:00.781) 0:42:32.909 ******** 2026-04-11 05:53:08.268603 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:53:08.268617 | orchestrator | 2026-04-11 05:53:08.268630 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-11 05:53:08.268643 | orchestrator | Saturday 11 April 2026 05:52:37 +0000 (0:00:00.929) 0:42:33.839 ******** 2026-04-11 05:53:08.268656 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-04-11 05:53:08.268668 | orchestrator | 2026-04-11 05:53:08.268681 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-11 05:53:08.268693 | orchestrator | Saturday 11 April 2026 05:52:41 +0000 (0:00:04.170) 0:42:38.010 ******** 2026-04-11 05:53:08.268706 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 05:53:08.268720 | orchestrator | 2026-04-11 05:53:08.268732 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-11 05:53:08.268745 | orchestrator | Saturday 11 April 2026 05:52:42 +0000 (0:00:00.839) 0:42:38.849 ******** 2026-04-11 05:53:08.268760 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-11 05:53:08.268776 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-11 05:53:08.268798 | orchestrator | 2026-04-11 05:53:08.268809 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-11 05:53:08.268820 | orchestrator | Saturday 11 April 2026 05:52:50 +0000 (0:00:07.405) 0:42:46.255 ******** 2026-04-11 05:53:08.268831 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268842 | orchestrator | 2026-04-11 05:53:08.268853 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-11 05:53:08.268864 | orchestrator | Saturday 11 April 2026 05:52:50 +0000 (0:00:00.813) 0:42:47.069 ******** 2026-04-11 05:53:08.268875 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268886 | orchestrator | 2026-04-11 05:53:08.268913 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 05:53:08.268925 | orchestrator | Saturday 11 April 2026 05:52:51 +0000 (0:00:00.779) 0:42:47.848 ******** 2026-04-11 05:53:08.268936 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268947 | orchestrator | 2026-04-11 05:53:08.268958 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-04-11 05:53:08.268969 | orchestrator | Saturday 11 April 2026 05:52:52 +0000 (0:00:00.797) 0:42:48.646 ******** 2026-04-11 05:53:08.268980 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.268991 | orchestrator | 2026-04-11 05:53:08.269002 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 05:53:08.269013 | orchestrator | Saturday 11 April 2026 05:52:53 +0000 (0:00:00.832) 0:42:49.478 ******** 2026-04-11 05:53:08.269024 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.269034 | orchestrator | 2026-04-11 05:53:08.269045 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 05:53:08.269056 | orchestrator | Saturday 11 April 2026 05:52:54 +0000 (0:00:00.816) 0:42:50.294 ******** 2026-04-11 05:53:08.269073 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:53:08.269084 | orchestrator | 2026-04-11 05:53:08.269095 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 05:53:08.269106 | orchestrator | Saturday 11 April 2026 05:52:54 +0000 (0:00:00.880) 0:42:51.175 ******** 2026-04-11 05:53:08.269117 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 05:53:08.269128 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-11 05:53:08.269139 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-11 05:53:08.269150 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.269161 | orchestrator | 2026-04-11 05:53:08.269172 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 05:53:08.269182 | orchestrator | Saturday 11 April 2026 05:52:56 +0000 (0:00:01.099) 0:42:52.275 ******** 2026-04-11 05:53:08.269193 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 05:53:08.269204 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-11 05:53:08.269215 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-11 05:53:08.269226 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.269237 | orchestrator | 2026-04-11 05:53:08.269247 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 05:53:08.269258 | orchestrator | Saturday 11 April 2026 05:52:57 +0000 (0:00:01.025) 0:42:53.300 ******** 2026-04-11 05:53:08.269298 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 05:53:08.269317 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-11 05:53:08.269337 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-11 05:53:08.269355 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.269371 | orchestrator | 2026-04-11 05:53:08.269382 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 05:53:08.269401 | orchestrator | Saturday 11 April 2026 05:52:58 +0000 (0:00:01.458) 0:42:54.759 ******** 2026-04-11 05:53:08.269413 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:53:08.269423 | orchestrator | 2026-04-11 05:53:08.269434 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 05:53:08.269445 | orchestrator | Saturday 11 April 2026 05:52:59 +0000 (0:00:00.797) 0:42:55.556 ******** 2026-04-11 05:53:08.269456 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-11 05:53:08.269467 | orchestrator | 2026-04-11 05:53:08.269478 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-11 05:53:08.269488 | orchestrator | Saturday 11 April 2026 05:53:00 +0000 (0:00:01.568) 0:42:57.125 ******** 2026-04-11 05:53:08.269499 | orchestrator | changed: [testbed-node-4] 2026-04-11 05:53:08.269510 | orchestrator | 
2026-04-11 05:53:08.269521 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-11 05:53:08.269532 | orchestrator | Saturday 11 April 2026 05:53:02 +0000 (0:00:01.390) 0:42:58.515 ******** 2026-04-11 05:53:08.269543 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:53:08.269554 | orchestrator | 2026-04-11 05:53:08.269565 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-11 05:53:08.269576 | orchestrator | Saturday 11 April 2026 05:53:03 +0000 (0:00:00.787) 0:42:59.302 ******** 2026-04-11 05:53:08.269587 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:53:08.269599 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:53:08.269609 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:53:08.269620 | orchestrator | 2026-04-11 05:53:08.269631 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-11 05:53:08.269642 | orchestrator | Saturday 11 April 2026 05:53:04 +0000 (0:00:01.324) 0:43:00.627 ******** 2026-04-11 05:53:08.269653 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-04-11 05:53:08.269664 | orchestrator | 2026-04-11 05:53:08.269674 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-11 05:53:08.269685 | orchestrator | Saturday 11 April 2026 05:53:05 +0000 (0:00:01.137) 0:43:01.765 ******** 2026-04-11 05:53:08.269696 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.269707 | orchestrator | 2026-04-11 05:53:08.269718 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-11 05:53:08.269729 | orchestrator | Saturday 11 April 2026 05:53:06 +0000 (0:00:01.128) 
0:43:02.893 ******** 2026-04-11 05:53:08.269740 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:53:08.269750 | orchestrator | 2026-04-11 05:53:08.269761 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-11 05:53:08.269772 | orchestrator | Saturday 11 April 2026 05:53:07 +0000 (0:00:01.110) 0:43:04.003 ******** 2026-04-11 05:53:08.269783 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:53:08.269794 | orchestrator | 2026-04-11 05:53:08.269812 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-11 05:54:11.582141 | orchestrator | Saturday 11 April 2026 05:53:09 +0000 (0:00:01.493) 0:43:05.497 ******** 2026-04-11 05:54:11.582325 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:54:11.582349 | orchestrator | 2026-04-11 05:54:11.582363 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-11 05:54:11.582374 | orchestrator | Saturday 11 April 2026 05:53:10 +0000 (0:00:01.129) 0:43:06.626 ******** 2026-04-11 05:54:11.582386 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-11 05:54:11.582398 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-11 05:54:11.582411 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-11 05:54:11.582422 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-11 05:54:11.582458 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-11 05:54:11.582471 | orchestrator | 2026-04-11 05:54:11.582496 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-11 05:54:11.582508 | orchestrator | Saturday 11 April 2026 05:53:12 +0000 (0:00:02.508) 0:43:09.135 ******** 2026-04-11 
05:54:11.582519 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:54:11.582531 | orchestrator | 2026-04-11 05:54:11.582542 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-11 05:54:11.582553 | orchestrator | Saturday 11 April 2026 05:53:13 +0000 (0:00:00.854) 0:43:09.989 ******** 2026-04-11 05:54:11.582565 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-04-11 05:54:11.582576 | orchestrator | 2026-04-11 05:54:11.582588 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-11 05:54:11.582599 | orchestrator | Saturday 11 April 2026 05:53:14 +0000 (0:00:01.138) 0:43:11.128 ******** 2026-04-11 05:54:11.582611 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-11 05:54:11.582622 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-11 05:54:11.582634 | orchestrator | 2026-04-11 05:54:11.582645 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-11 05:54:11.582657 | orchestrator | Saturday 11 April 2026 05:53:16 +0000 (0:00:01.908) 0:43:13.036 ******** 2026-04-11 05:54:11.582669 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 05:54:11.582680 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-11 05:54:11.582692 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 05:54:11.582704 | orchestrator | 2026-04-11 05:54:11.582715 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-11 05:54:11.582727 | orchestrator | Saturday 11 April 2026 05:53:19 +0000 (0:00:03.168) 0:43:16.205 ******** 2026-04-11 05:54:11.582739 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-11 05:54:11.582751 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-11 
05:54:11.582763 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:54:11.582774 | orchestrator | 2026-04-11 05:54:11.582786 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-11 05:54:11.582801 | orchestrator | Saturday 11 April 2026 05:53:21 +0000 (0:00:01.623) 0:43:17.828 ******** 2026-04-11 05:54:11.582820 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:54:11.582836 | orchestrator | 2026-04-11 05:54:11.582855 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-11 05:54:11.582875 | orchestrator | Saturday 11 April 2026 05:53:22 +0000 (0:00:00.869) 0:43:18.697 ******** 2026-04-11 05:54:11.582893 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:54:11.582911 | orchestrator | 2026-04-11 05:54:11.582923 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-11 05:54:11.582933 | orchestrator | Saturday 11 April 2026 05:53:23 +0000 (0:00:00.768) 0:43:19.466 ******** 2026-04-11 05:54:11.582944 | orchestrator | skipping: [testbed-node-4] 2026-04-11 05:54:11.582955 | orchestrator | 2026-04-11 05:54:11.582966 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-11 05:54:11.582976 | orchestrator | Saturday 11 April 2026 05:53:24 +0000 (0:00:00.793) 0:43:20.260 ******** 2026-04-11 05:54:11.582987 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-04-11 05:54:11.582998 | orchestrator | 2026-04-11 05:54:11.583009 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-11 05:54:11.583019 | orchestrator | Saturday 11 April 2026 05:53:25 +0000 (0:00:01.094) 0:43:21.354 ******** 2026-04-11 05:54:11.583030 | orchestrator | ok: [testbed-node-4] 2026-04-11 05:54:11.583041 | orchestrator | 2026-04-11 05:54:11.583052 | orchestrator | TASK [ceph-osd : Collect osd 
ids] **********************************************
2026-04-11 05:54:11.583062 | orchestrator | Saturday 11 April 2026 05:53:26 +0000 (0:00:01.439) 0:43:22.793 ********
2026-04-11 05:54:11.583082 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:54:11.583093 | orchestrator |
2026-04-11 05:54:11.583104 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-11 05:54:11.583115 | orchestrator | Saturday 11 April 2026 05:53:30 +0000 (0:00:03.496) 0:43:26.290 ********
2026-04-11 05:54:11.583125 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4
2026-04-11 05:54:11.583136 | orchestrator |
2026-04-11 05:54:11.583147 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-11 05:54:11.583157 | orchestrator | Saturday 11 April 2026 05:53:31 +0000 (0:00:01.335) 0:43:27.626 ********
2026-04-11 05:54:11.583168 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:54:11.583179 | orchestrator |
2026-04-11 05:54:11.583190 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-11 05:54:11.583201 | orchestrator | Saturday 11 April 2026 05:53:33 +0000 (0:00:02.052) 0:43:29.679 ********
2026-04-11 05:54:11.583212 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:54:11.583223 | orchestrator |
2026-04-11 05:54:11.583234 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-11 05:54:11.583304 | orchestrator | Saturday 11 April 2026 05:53:35 +0000 (0:00:01.992) 0:43:31.671 ********
2026-04-11 05:54:11.583319 | orchestrator | ok: [testbed-node-4]
2026-04-11 05:54:11.583330 | orchestrator |
2026-04-11 05:54:11.583341 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-11 05:54:11.583351 | orchestrator | Saturday 11 April 2026 05:53:37 +0000 (0:00:02.278) 0:43:33.950 ********
2026-04-11 05:54:11.583362 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:11.583373 | orchestrator |
2026-04-11 05:54:11.583384 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-11 05:54:11.583394 | orchestrator | Saturday 11 April 2026 05:53:38 +0000 (0:00:01.214) 0:43:35.164 ********
2026-04-11 05:54:11.583405 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:11.583416 | orchestrator |
2026-04-11 05:54:11.583426 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-11 05:54:11.583437 | orchestrator | Saturday 11 April 2026 05:53:40 +0000 (0:00:01.183) 0:43:36.348 ********
2026-04-11 05:54:11.583448 | orchestrator | ok: [testbed-node-4] => (item=5)
2026-04-11 05:54:11.583465 | orchestrator | ok: [testbed-node-4] => (item=2)
2026-04-11 05:54:11.583477 | orchestrator |
2026-04-11 05:54:11.583487 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-11 05:54:11.583498 | orchestrator | Saturday 11 April 2026 05:53:41 +0000 (0:00:01.807) 0:43:38.155 ********
2026-04-11 05:54:11.583509 | orchestrator | ok: [testbed-node-4] => (item=5)
2026-04-11 05:54:11.583520 | orchestrator | ok: [testbed-node-4] => (item=2)
2026-04-11 05:54:11.583530 | orchestrator |
2026-04-11 05:54:11.583541 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-11 05:54:11.583551 | orchestrator | Saturday 11 April 2026 05:53:45 +0000 (0:00:03.081) 0:43:41.237 ********
2026-04-11 05:54:11.583562 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-04-11 05:54:11.583573 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-04-11 05:54:11.583583 | orchestrator |
2026-04-11 05:54:11.583594 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-11 05:54:11.583605 | orchestrator | Saturday 11 April 2026 05:53:49 +0000 (0:00:04.466) 0:43:45.704 ********
2026-04-11 05:54:11.583615 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:11.583626 | orchestrator |
2026-04-11 05:54:11.583637 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-11 05:54:11.583647 | orchestrator | Saturday 11 April 2026 05:53:50 +0000 (0:00:00.920) 0:43:46.624 ********
2026-04-11 05:54:11.583658 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:11.583668 | orchestrator |
2026-04-11 05:54:11.583679 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-11 05:54:11.583689 | orchestrator | Saturday 11 April 2026 05:53:51 +0000 (0:00:00.920) 0:43:47.545 ********
2026-04-11 05:54:11.583708 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:11.583719 | orchestrator |
2026-04-11 05:54:11.583730 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-04-11 05:54:11.583741 | orchestrator | Saturday 11 April 2026 05:53:52 +0000 (0:00:00.975) 0:43:48.520 ********
2026-04-11 05:54:11.583751 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:11.583762 | orchestrator |
2026-04-11 05:54:11.583772 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-04-11 05:54:11.583783 | orchestrator | Saturday 11 April 2026 05:53:53 +0000 (0:00:00.796) 0:43:49.316 ********
2026-04-11 05:54:11.583794 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:11.583804 | orchestrator |
2026-04-11 05:54:11.583815 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-04-11 05:54:11.583826 | orchestrator | Saturday 11 April 2026 05:53:53 +0000 (0:00:00.768) 0:43:50.085 ********
2026-04-11 05:54:11.583837 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-04-11 05:54:11.583847 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left).
2026-04-11 05:54:11.583858 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left).
2026-04-11 05:54:11.583869 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left).
2026-04-11 05:54:11.583887 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-11 05:54:11.583906 | orchestrator |
2026-04-11 05:54:11.583924 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-11 05:54:11.583944 | orchestrator | Saturday 11 April 2026 05:54:07 +0000 (0:00:13.745) 0:44:03.830 ********
2026-04-11 05:54:11.583963 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:11.583983 | orchestrator |
2026-04-11 05:54:11.584002 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-11 05:54:11.584019 | orchestrator | Saturday 11 April 2026 05:54:08 +0000 (0:00:00.797) 0:44:04.627 ********
2026-04-11 05:54:11.584037 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:11.584053 | orchestrator |
2026-04-11 05:54:11.584071 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-11 05:54:11.584089 | orchestrator | Saturday 11 April 2026 05:54:09 +0000 (0:00:00.820) 0:44:05.448 ********
2026-04-11 05:54:11.584106 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:11.584124 | orchestrator |
2026-04-11 05:54:11.584142 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-11 05:54:11.584160 | orchestrator | Saturday 11 April 2026 05:54:10 +0000 (0:00:00.812) 0:44:06.261 ********
2026-04-11 05:54:11.584178 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:11.584196 | orchestrator |
2026-04-11 05:54:11.584214 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-11 05:54:11.584232 | orchestrator | Saturday 11 April 2026 05:54:10 +0000 (0:00:00.768) 0:44:07.029 ********
2026-04-11 05:54:11.584249 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:11.584297 | orchestrator |
2026-04-11 05:54:11.584316 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-11 05:54:11.584348 | orchestrator | Saturday 11 April 2026 05:54:11 +0000 (0:00:00.756) 0:44:07.785 ********
2026-04-11 05:54:35.921496 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:35.921608 | orchestrator |
2026-04-11 05:54:35.921624 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-11 05:54:35.921636 | orchestrator | Saturday 11 April 2026 05:54:12 +0000 (0:00:00.785) 0:44:08.570 ********
2026-04-11 05:54:35.921646 | orchestrator | skipping: [testbed-node-4]
2026-04-11 05:54:35.921656 | orchestrator |
2026-04-11 05:54:35.921666 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-04-11 05:54:35.921675 | orchestrator |
2026-04-11 05:54:35.921685 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-11 05:54:35.921716 | orchestrator | Saturday 11 April 2026 05:54:13 +0000 (0:00:01.012) 0:44:09.583 ********
2026-04-11 05:54:35.921726 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5
2026-04-11 05:54:35.921736 | orchestrator |
2026-04-11 05:54:35.921745 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-11 05:54:35.921768 | orchestrator | Saturday 11 April 2026 05:54:14 +0000 (0:00:01.348) 0:44:10.931 ********
2026-04-11 05:54:35.921778 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:35.921788 | orchestrator |
2026-04-11 05:54:35.921798 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-11 05:54:35.921808 | orchestrator | Saturday 11 April 2026 05:54:16 +0000 (0:00:01.506) 0:44:12.437 ********
2026-04-11 05:54:35.921817 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:35.921827 | orchestrator |
2026-04-11 05:54:35.921836 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-11 05:54:35.921846 | orchestrator | Saturday 11 April 2026 05:54:17 +0000 (0:00:01.143) 0:44:13.581 ********
2026-04-11 05:54:35.921855 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:35.921864 | orchestrator |
2026-04-11 05:54:35.921874 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-11 05:54:35.921884 | orchestrator | Saturday 11 April 2026 05:54:18 +0000 (0:00:01.478) 0:44:15.059 ********
2026-04-11 05:54:35.921893 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:35.921902 | orchestrator |
2026-04-11 05:54:35.921912 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-11 05:54:35.921922 | orchestrator | Saturday 11 April 2026 05:54:19 +0000 (0:00:01.125) 0:44:16.185 ********
2026-04-11 05:54:35.921931 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:35.921941 | orchestrator |
2026-04-11 05:54:35.921950 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-11 05:54:35.921960 | orchestrator | Saturday 11 April 2026 05:54:21 +0000 (0:00:01.178) 0:44:17.364 ********
2026-04-11 05:54:35.921969 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:35.921979 | orchestrator |
2026-04-11 05:54:35.921989 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-11 05:54:35.921999 | orchestrator | Saturday 11 April 2026 05:54:22 +0000 (0:00:01.202) 0:44:18.567 ********
2026-04-11 05:54:35.922009 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:54:35.922079 | orchestrator |
2026-04-11 05:54:35.922093 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-11 05:54:35.922104 | orchestrator | Saturday 11 April 2026 05:54:23 +0000 (0:00:01.181) 0:44:19.749 ********
2026-04-11 05:54:35.922116 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:35.922127 | orchestrator |
2026-04-11 05:54:35.922138 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-11 05:54:35.922149 | orchestrator | Saturday 11 April 2026 05:54:24 +0000 (0:00:01.170) 0:44:20.919 ********
2026-04-11 05:54:35.922160 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:54:35.922172 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:54:35.922183 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:54:35.922194 | orchestrator |
2026-04-11 05:54:35.922205 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-11 05:54:35.922216 | orchestrator | Saturday 11 April 2026 05:54:26 +0000 (0:00:01.995) 0:44:22.915 ********
2026-04-11 05:54:35.922228 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:35.922239 | orchestrator |
2026-04-11 05:54:35.922250 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-11 05:54:35.922285 | orchestrator | Saturday 11 April 2026 05:54:27 +0000 (0:00:01.221) 0:44:24.137 ********
2026-04-11 05:54:35.922297 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:54:35.922308 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:54:35.922327 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:54:35.922338 | orchestrator |
2026-04-11 05:54:35.922349 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-11 05:54:35.922361 | orchestrator | Saturday 11 April 2026 05:54:31 +0000 (0:00:03.260) 0:44:27.398 ********
2026-04-11 05:54:35.922372 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-11 05:54:35.922383 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-11 05:54:35.922394 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-11 05:54:35.922406 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:54:35.922417 | orchestrator |
2026-04-11 05:54:35.922427 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-11 05:54:35.922436 | orchestrator | Saturday 11 April 2026 05:54:33 +0000 (0:00:01.841) 0:44:29.240 ********
2026-04-11 05:54:35.922448 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 05:54:35.922476 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 05:54:35.922487 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 05:54:35.922497 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:54:35.922506 | orchestrator |
2026-04-11 05:54:35.922516 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-11 05:54:35.922526 | orchestrator | Saturday 11 April 2026 05:54:34 +0000 (0:00:01.623) 0:44:30.863 ********
2026-04-11 05:54:35.922543 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:54:35.922557 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:54:35.922567 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 05:54:35.922577 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:54:35.922586 | orchestrator |
2026-04-11 05:54:35.922596 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-11 05:54:35.922606 | orchestrator | Saturday 11 April 2026 05:54:35 +0000 (0:00:01.150) 0:44:32.014 ********
2026-04-11 05:54:35.922618 | orchestrator |
ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 05:54:28.839113', 'end': '2026-04-11 05:54:28.888336', 'delta': '0:00:00.049223', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-11 05:54:35.922637 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '26fb3b048944', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 05:54:29.369564', 'end': '2026-04-11 05:54:29.411722', 'delta': '0:00:00.042158', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26fb3b048944'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-11 05:54:35.922656 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '5c0324173fbf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 05:54:29.921026', 'end': '2026-04-11 05:54:29.961620', 'delta': '0:00:00.040594', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5c0324173fbf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 05:54:54.731949 | orchestrator |
2026-04-11 05:54:54.732048 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-11 05:54:54.732061 | orchestrator | Saturday 11 April 2026 05:54:36 +0000 (0:00:01.179) 0:44:33.193 ********
2026-04-11 05:54:54.732069 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:54.732077 | orchestrator |
2026-04-11 05:54:54.732085 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-11 05:54:54.732094 | orchestrator | Saturday 11 April 2026 05:54:38 +0000 (0:00:01.271) 0:44:34.491 ********
2026-04-11 05:54:54.732101 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:54:54.732110 | orchestrator |
2026-04-11 05:54:54.732117 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-11 05:54:54.732138 | orchestrator | Saturday 11 April 2026 05:54:39 +0000 (0:00:01.236) 0:44:35.762 ********
2026-04-11 05:54:54.732146 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:54.732154 | orchestrator |
2026-04-11 05:54:54.732161 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-11 05:54:54.732168 | orchestrator | Saturday 11 April 2026 05:54:40 +0000 (0:00:01.236) 0:44:36.999 ********
2026-04-11 05:54:54.732175 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-11 05:54:54.732183 | orchestrator |
2026-04-11 05:54:54.732190 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 05:54:54.732197 | orchestrator | Saturday 11 April 2026 05:54:42 +0000 (0:00:01.913) 0:44:38.913 ********
2026-04-11 05:54:54.732204 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:54.732212 | orchestrator |
2026-04-11 05:54:54.732219 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-11 05:54:54.732226 | orchestrator | Saturday 11 April 2026 05:54:43 +0000 (0:00:01.170) 0:44:40.084 ********
2026-04-11 05:54:54.732233 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:54:54.732241 | orchestrator |
2026-04-11 05:54:54.732248 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-11 05:54:54.732323 | orchestrator | Saturday 11 April 2026 05:54:45 +0000 (0:00:01.133) 0:44:41.217 ********
2026-04-11 05:54:54.732332 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:54:54.732340 | orchestrator |
2026-04-11 05:54:54.732347 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 05:54:54.732354 | orchestrator | Saturday 11 April 2026 05:54:46 +0000 (0:00:01.242) 0:44:42.460 ********
2026-04-11 05:54:54.732361 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:54:54.732368 | orchestrator |
2026-04-11 05:54:54.732376 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-11 05:54:54.732383 | orchestrator | Saturday 11 April 2026 05:54:47 +0000 (0:00:01.164) 0:44:43.624 ********
2026-04-11 05:54:54.732391 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:54:54.732398 | orchestrator |
2026-04-11 05:54:54.732405 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-11 05:54:54.732413 | orchestrator | Saturday 11 April 2026 05:54:48 +0000 (0:00:01.176) 0:44:44.801 ********
2026-04-11 05:54:54.732420 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:54.732427 | orchestrator |
2026-04-11 05:54:54.732434 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-11 05:54:54.732442 | orchestrator | Saturday 11 April 2026 05:54:49 +0000 (0:00:01.235) 0:44:46.037 ********
2026-04-11 05:54:54.732449 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:54:54.732456 | orchestrator |
2026-04-11 05:54:54.732463 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-11 05:54:54.732470 | orchestrator | Saturday 11 April 2026 05:54:50 +0000 (0:00:01.125) 0:44:47.163 ********
2026-04-11 05:54:54.732477 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:54.732485 | orchestrator |
2026-04-11 05:54:54.732492 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-11 05:54:54.732501 | orchestrator | Saturday 11 April 2026 05:54:52 +0000 (0:00:01.181) 0:44:48.345 ********
2026-04-11 05:54:54.732509 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:54:54.732517 | orchestrator |
2026-04-11 05:54:54.732525 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-11 05:54:54.732534 | orchestrator | Saturday 11 April 2026 05:54:53 +0000 (0:00:01.163) 0:44:49.508 ********
2026-04-11 05:54:54.732542 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:54:54.732551 | orchestrator |
2026-04-11 05:54:54.732559 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-11 05:54:54.732567 | orchestrator | Saturday 11 April 2026 05:54:54 +0000 (0:00:01.238) 0:44:50.747 ********
2026-04-11 05:54:54.732578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode':
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:54:54.732605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056', 'dm-uuid-LVM-6h6BzLnxVSITPOCXsTMPdEdwYnxpyl6jcENBjNwdWV4iIXI6HpUJIGXCmHnbKWOn'], 'uuids': ['9614ebde-9763-41b8-8070-f8f6acc1ef2b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn']}})  2026-04-11 05:54:54.732621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735', 'scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '17a8d280', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-11 05:54:54.732637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gv5rB0-5v31-5ChI-IvnR-CmdW-Foh5-mihe2a', 'scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3', 'scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412']}})  2026-04-11 05:54:54.732647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:54:54.732657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:54:54.732667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-31-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 05:54:54.732676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:54:54.732685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ', 'dm-uuid-CRYPT-LUKS2-bdcb2384073e4d9c84ce45a3274a4645-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 05:54:54.732701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:54:56.063993 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412', 'dm-uuid-LVM-VdQ7qTAVdW9b0W0u4soeoyYCMykAdMqIVywyC0poxaFsTavehHwqykfd0GhP5gkQ'], 'uuids': ['bdcb2384-073e-4d9c-84ce-45a3274a4645'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ']}})  2026-04-11 05:54:56.064124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JtVcog-BSy1-h8Zb-tm9w-DiRX-1Dbq-bS56zI', 'scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78', 'scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056']}})  2026-04-11 05:54:56.064153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:54:56.064173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a75c226', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 05:54:56.064240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:54:56.064256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 05:54:56.064339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn', 'dm-uuid-CRYPT-LUKS2-9614ebde976341b88070f8f6acc1ef2b-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 05:54:56.064353 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:54:56.064367 | orchestrator | 2026-04-11 05:54:56.064379 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 05:54:56.064392 | orchestrator | Saturday 11 April 2026 05:54:55 +0000 (0:00:01.401) 0:44:52.148 ******** 2026-04-11 05:54:56.064404 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:54:56.064418 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056', 'dm-uuid-LVM-6h6BzLnxVSITPOCXsTMPdEdwYnxpyl6jcENBjNwdWV4iIXI6HpUJIGXCmHnbKWOn'], 'uuids': ['9614ebde-9763-41b8-8070-f8f6acc1ef2b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn']}}, 'ansible_loop_var': 'item'})  2026-04-11 05:54:56.064431 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735', 'scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '17a8d280', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:54:56.064473 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gv5rB0-5v31-5ChI-IvnR-CmdW-Foh5-mihe2a', 'scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3', 'scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412']}}, 'ansible_loop_var': 'item'})  2026-04-11 05:54:56.184540 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:54:56.184666 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:54:56.184693 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:54:56.184715 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:54:56.184765 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ', 'dm-uuid-CRYPT-LUKS2-bdcb2384073e4d9c84ce45a3274a4645-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:54:56.184804 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:54:56.184852 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412', 'dm-uuid-LVM-VdQ7qTAVdW9b0W0u4soeoyYCMykAdMqIVywyC0poxaFsTavehHwqykfd0GhP5gkQ'], 'uuids': ['bdcb2384-073e-4d9c-84ce-45a3274a4645'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ']}}, 'ansible_loop_var': 'item'})  2026-04-11 05:54:56.184875 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JtVcog-BSy1-h8Zb-tm9w-DiRX-1Dbq-bS56zI', 'scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78', 'scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056']}}, 'ansible_loop_var': 'item'})  2026-04-11 05:54:56.184901 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:54:56.184956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a75c226', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:55:25.284919 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:55:25.285038 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:55:25.285055 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn', 'dm-uuid-CRYPT-LUKS2-9614ebde976341b88070f8f6acc1ef2b-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 05:55:25.285091 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:55:25.285106 | orchestrator | 2026-04-11 05:55:25.285118 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-11 05:55:25.285139 | orchestrator | Saturday 11 April 2026 05:54:57 +0000 (0:00:01.383) 0:44:53.532 ******** 2026-04-11 05:55:25.285157 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:55:25.285177 | orchestrator | 2026-04-11 05:55:25.285196 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-11 05:55:25.285215 | orchestrator | Saturday 11 April 2026 05:54:58 +0000 (0:00:01.528) 0:44:55.060 ******** 2026-04-11 05:55:25.285232 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:55:25.285251 | orchestrator | 2026-04-11 05:55:25.285320 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:55:25.285338 | orchestrator | Saturday 11 April 2026 05:55:00 +0000 (0:00:01.178) 0:44:56.239 ******** 2026-04-11 05:55:25.285349 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:55:25.285360 | orchestrator | 2026-04-11 05:55:25.285371 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:55:25.285382 | orchestrator | Saturday 11 April 2026 05:55:01 +0000 (0:00:01.537) 0:44:57.777 ******** 2026-04-11 05:55:25.285392 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:55:25.285410 | orchestrator | 2026-04-11 05:55:25.285435 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 05:55:25.285475 | orchestrator | Saturday 11 April 2026 05:55:02 +0000 (0:00:01.168) 0:44:58.946 ******** 2026-04-11 05:55:25.285494 | orchestrator | skipping: [testbed-node-5] 2026-04-11 
05:55:25.285513 | orchestrator | 2026-04-11 05:55:25.285531 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 05:55:25.285547 | orchestrator | Saturday 11 April 2026 05:55:03 +0000 (0:00:01.239) 0:45:00.185 ******** 2026-04-11 05:55:25.285565 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:55:25.285584 | orchestrator | 2026-04-11 05:55:25.285603 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 05:55:25.285623 | orchestrator | Saturday 11 April 2026 05:55:05 +0000 (0:00:01.136) 0:45:01.322 ******** 2026-04-11 05:55:25.285643 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-11 05:55:25.285663 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-11 05:55:25.285682 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-11 05:55:25.285700 | orchestrator | 2026-04-11 05:55:25.285717 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 05:55:25.285737 | orchestrator | Saturday 11 April 2026 05:55:07 +0000 (0:00:02.122) 0:45:03.444 ******** 2026-04-11 05:55:25.285755 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-11 05:55:25.285773 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-11 05:55:25.285793 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-11 05:55:25.285813 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:55:25.285831 | orchestrator | 2026-04-11 05:55:25.285850 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 05:55:25.285868 | orchestrator | Saturday 11 April 2026 05:55:08 +0000 (0:00:01.171) 0:45:04.616 ******** 2026-04-11 05:55:25.285912 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-04-11 05:55:25.285925 | 
orchestrator | 2026-04-11 05:55:25.285937 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 05:55:25.285950 | orchestrator | Saturday 11 April 2026 05:55:09 +0000 (0:00:01.102) 0:45:05.718 ******** 2026-04-11 05:55:25.285975 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:55:25.285986 | orchestrator | 2026-04-11 05:55:25.285997 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 05:55:25.286007 | orchestrator | Saturday 11 April 2026 05:55:10 +0000 (0:00:01.116) 0:45:06.835 ******** 2026-04-11 05:55:25.286085 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:55:25.286098 | orchestrator | 2026-04-11 05:55:25.286109 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 05:55:25.286120 | orchestrator | Saturday 11 April 2026 05:55:11 +0000 (0:00:01.121) 0:45:07.956 ******** 2026-04-11 05:55:25.286130 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:55:25.286141 | orchestrator | 2026-04-11 05:55:25.286152 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 05:55:25.286163 | orchestrator | Saturday 11 April 2026 05:55:12 +0000 (0:00:01.107) 0:45:09.064 ******** 2026-04-11 05:55:25.286173 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:55:25.286184 | orchestrator | 2026-04-11 05:55:25.286195 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 05:55:25.286206 | orchestrator | Saturday 11 April 2026 05:55:14 +0000 (0:00:01.247) 0:45:10.312 ******** 2026-04-11 05:55:25.286216 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-11 05:55:25.286227 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-11 05:55:25.286238 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-04-11 05:55:25.286249 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:55:25.286283 | orchestrator | 2026-04-11 05:55:25.286294 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 05:55:25.286304 | orchestrator | Saturday 11 April 2026 05:55:15 +0000 (0:00:01.438) 0:45:11.751 ******** 2026-04-11 05:55:25.286315 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-11 05:55:25.286326 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-11 05:55:25.286337 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-11 05:55:25.286347 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:55:25.286437 | orchestrator | 2026-04-11 05:55:25.286453 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 05:55:25.286464 | orchestrator | Saturday 11 April 2026 05:55:16 +0000 (0:00:01.406) 0:45:13.157 ******** 2026-04-11 05:55:25.286475 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-11 05:55:25.286485 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-11 05:55:25.286496 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-11 05:55:25.286507 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:55:25.286517 | orchestrator | 2026-04-11 05:55:25.286528 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 05:55:25.286539 | orchestrator | Saturday 11 April 2026 05:55:18 +0000 (0:00:01.406) 0:45:14.564 ******** 2026-04-11 05:55:25.286550 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:55:25.286561 | orchestrator | 2026-04-11 05:55:25.286571 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 05:55:25.286582 | orchestrator | Saturday 11 April 2026 05:55:19 +0000 
(0:00:01.132) 0:45:15.697 ******** 2026-04-11 05:55:25.286593 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-11 05:55:25.286604 | orchestrator | 2026-04-11 05:55:25.286615 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 05:55:25.286630 | orchestrator | Saturday 11 April 2026 05:55:20 +0000 (0:00:01.367) 0:45:17.065 ******** 2026-04-11 05:55:25.286649 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:55:25.286665 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:55:25.286704 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:55:25.286725 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 05:55:25.286757 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:55:25.286774 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-11 05:55:25.286790 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:55:25.286805 | orchestrator | 2026-04-11 05:55:25.286822 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 05:55:25.286841 | orchestrator | Saturday 11 April 2026 05:55:23 +0000 (0:00:02.205) 0:45:19.270 ******** 2026-04-11 05:55:25.286858 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 05:55:25.286877 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 05:55:25.286894 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 05:55:25.286913 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-11 05:55:25.286932 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 05:55:25.286951 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-11 05:55:25.286969 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 05:55:25.286987 | orchestrator | 2026-04-11 05:55:25.287023 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-04-11 05:56:08.761225 | orchestrator | Saturday 11 April 2026 05:55:25 +0000 (0:00:02.576) 0:45:21.846 ******** 2026-04-11 05:56:08.761397 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:56:08.761413 | orchestrator | 2026-04-11 05:56:08.761425 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-04-11 05:56:08.761434 | orchestrator | Saturday 11 April 2026 05:55:26 +0000 (0:00:01.162) 0:45:23.009 ******** 2026-04-11 05:56:08.761443 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:56:08.761452 | orchestrator | 2026-04-11 05:56:08.761462 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-04-11 05:56:08.761471 | orchestrator | Saturday 11 April 2026 05:55:27 +0000 (0:00:00.810) 0:45:23.820 ******** 2026-04-11 05:56:08.761480 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:56:08.761488 | orchestrator | 2026-04-11 05:56:08.761497 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-04-11 05:56:08.761506 | orchestrator | Saturday 11 April 2026 05:55:28 +0000 (0:00:00.925) 0:45:24.746 ******** 2026-04-11 05:56:08.761515 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-04-11 05:56:08.761526 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-04-11 05:56:08.761535 | orchestrator | 2026-04-11 05:56:08.761544 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-04-11 05:56:08.761553 | orchestrator | Saturday 11 April 2026 05:55:32 +0000 (0:00:03.984) 0:45:28.731 ******** 2026-04-11 05:56:08.761561 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-04-11 05:56:08.761571 | orchestrator | 2026-04-11 05:56:08.761580 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 05:56:08.761589 | orchestrator | Saturday 11 April 2026 05:55:33 +0000 (0:00:01.137) 0:45:29.868 ******** 2026-04-11 05:56:08.761597 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-04-11 05:56:08.761606 | orchestrator | 2026-04-11 05:56:08.761615 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 05:56:08.761623 | orchestrator | Saturday 11 April 2026 05:55:34 +0000 (0:00:01.152) 0:45:31.021 ******** 2026-04-11 05:56:08.761632 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:56:08.761641 | orchestrator | 2026-04-11 05:56:08.761650 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 05:56:08.761658 | orchestrator | Saturday 11 April 2026 05:55:35 +0000 (0:00:01.147) 0:45:32.169 ******** 2026-04-11 05:56:08.761694 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:56:08.761703 | orchestrator | 2026-04-11 05:56:08.761712 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-11 05:56:08.761723 | orchestrator | Saturday 11 April 2026 05:55:37 +0000 (0:00:01.547) 0:45:33.717 ******** 2026-04-11 05:56:08.761733 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:56:08.761743 | orchestrator | 2026-04-11 05:56:08.761753 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 05:56:08.761763 | orchestrator | 
Saturday 11 April 2026 05:55:39 +0000 (0:00:01.587) 0:45:35.304 ********
2026-04-11 05:56:08.761774 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:08.761784 | orchestrator |
2026-04-11 05:56:08.761794 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-11 05:56:08.761816 | orchestrator | Saturday 11 April 2026 05:55:40 +0000 (0:00:01.530) 0:45:36.834 ********
2026-04-11 05:56:08.761836 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.761846 | orchestrator |
2026-04-11 05:56:08.761857 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-11 05:56:08.761867 | orchestrator | Saturday 11 April 2026 05:55:41 +0000 (0:00:01.117) 0:45:37.952 ********
2026-04-11 05:56:08.761878 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.761889 | orchestrator |
2026-04-11 05:56:08.761899 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-11 05:56:08.761910 | orchestrator | Saturday 11 April 2026 05:55:42 +0000 (0:00:01.228) 0:45:39.181 ********
2026-04-11 05:56:08.761920 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.761930 | orchestrator |
2026-04-11 05:56:08.761941 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-11 05:56:08.761951 | orchestrator | Saturday 11 April 2026 05:55:44 +0000 (0:00:01.117) 0:45:40.299 ********
2026-04-11 05:56:08.761962 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:08.761972 | orchestrator |
2026-04-11 05:56:08.762000 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-11 05:56:08.762010 | orchestrator | Saturday 11 April 2026 05:55:45 +0000 (0:00:01.596) 0:45:41.895 ********
2026-04-11 05:56:08.762068 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:08.762079 | orchestrator |
2026-04-11 05:56:08.762089 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-11 05:56:08.762098 | orchestrator | Saturday 11 April 2026 05:55:47 +0000 (0:00:01.662) 0:45:43.558 ********
2026-04-11 05:56:08.762117 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762126 | orchestrator |
2026-04-11 05:56:08.762134 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 05:56:08.762143 | orchestrator | Saturday 11 April 2026 05:55:48 +0000 (0:00:00.782) 0:45:44.341 ********
2026-04-11 05:56:08.762151 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762160 | orchestrator |
2026-04-11 05:56:08.762169 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 05:56:08.762177 | orchestrator | Saturday 11 April 2026 05:55:48 +0000 (0:00:00.769) 0:45:45.110 ********
2026-04-11 05:56:08.762186 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:08.762195 | orchestrator |
2026-04-11 05:56:08.762203 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 05:56:08.762212 | orchestrator | Saturday 11 April 2026 05:55:49 +0000 (0:00:00.786) 0:45:45.896 ********
2026-04-11 05:56:08.762221 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:08.762229 | orchestrator |
2026-04-11 05:56:08.762238 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 05:56:08.762246 | orchestrator | Saturday 11 April 2026 05:55:50 +0000 (0:00:00.801) 0:45:46.698 ********
2026-04-11 05:56:08.762270 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:08.762279 | orchestrator |
2026-04-11 05:56:08.762307 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 05:56:08.762316 | orchestrator | Saturday 11 April 2026 05:55:51 +0000 (0:00:00.824) 0:45:47.522 ********
2026-04-11 05:56:08.762335 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762344 | orchestrator |
2026-04-11 05:56:08.762352 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 05:56:08.762361 | orchestrator | Saturday 11 April 2026 05:55:52 +0000 (0:00:00.766) 0:45:48.289 ********
2026-04-11 05:56:08.762370 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762378 | orchestrator |
2026-04-11 05:56:08.762387 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 05:56:08.762395 | orchestrator | Saturday 11 April 2026 05:55:52 +0000 (0:00:00.769) 0:45:49.058 ********
2026-04-11 05:56:08.762404 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762413 | orchestrator |
2026-04-11 05:56:08.762421 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 05:56:08.762430 | orchestrator | Saturday 11 April 2026 05:55:53 +0000 (0:00:00.836) 0:45:49.894 ********
2026-04-11 05:56:08.762438 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:08.762447 | orchestrator |
2026-04-11 05:56:08.762455 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 05:56:08.762464 | orchestrator | Saturday 11 April 2026 05:55:54 +0000 (0:00:00.772) 0:45:50.667 ********
2026-04-11 05:56:08.762472 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:08.762481 | orchestrator |
2026-04-11 05:56:08.762489 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-11 05:56:08.762498 | orchestrator | Saturday 11 April 2026 05:55:55 +0000 (0:00:00.815) 0:45:51.483 ********
2026-04-11 05:56:08.762507 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762515 | orchestrator |
2026-04-11 05:56:08.762524 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-11 05:56:08.762532 | orchestrator | Saturday 11 April 2026 05:55:56 +0000 (0:00:00.790) 0:45:52.274 ********
2026-04-11 05:56:08.762541 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762549 | orchestrator |
2026-04-11 05:56:08.762558 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-11 05:56:08.762567 | orchestrator | Saturday 11 April 2026 05:55:56 +0000 (0:00:00.775) 0:45:53.049 ********
2026-04-11 05:56:08.762575 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762584 | orchestrator |
2026-04-11 05:56:08.762593 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-11 05:56:08.762601 | orchestrator | Saturday 11 April 2026 05:55:57 +0000 (0:00:00.812) 0:45:53.862 ********
2026-04-11 05:56:08.762610 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762618 | orchestrator |
2026-04-11 05:56:08.762627 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-11 05:56:08.762636 | orchestrator | Saturday 11 April 2026 05:55:58 +0000 (0:00:00.805) 0:45:54.667 ********
2026-04-11 05:56:08.762644 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762653 | orchestrator |
2026-04-11 05:56:08.762661 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-11 05:56:08.762670 | orchestrator | Saturday 11 April 2026 05:55:59 +0000 (0:00:00.788) 0:45:55.456 ********
2026-04-11 05:56:08.762679 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762687 | orchestrator |
2026-04-11 05:56:08.762696 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-11 05:56:08.762704 | orchestrator | Saturday 11 April 2026 05:56:00 +0000 (0:00:00.806) 0:45:56.262 ********
2026-04-11 05:56:08.762713 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762721 | orchestrator |
2026-04-11 05:56:08.762730 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-11 05:56:08.762740 | orchestrator | Saturday 11 April 2026 05:56:00 +0000 (0:00:00.793) 0:45:57.056 ********
2026-04-11 05:56:08.762749 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762757 | orchestrator |
2026-04-11 05:56:08.762766 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-11 05:56:08.762775 | orchestrator | Saturday 11 April 2026 05:56:01 +0000 (0:00:00.841) 0:45:57.897 ********
2026-04-11 05:56:08.762790 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762798 | orchestrator |
2026-04-11 05:56:08.762807 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-11 05:56:08.762821 | orchestrator | Saturday 11 April 2026 05:56:02 +0000 (0:00:00.778) 0:45:58.676 ********
2026-04-11 05:56:08.762830 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762839 | orchestrator |
2026-04-11 05:56:08.762847 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-11 05:56:08.762856 | orchestrator | Saturday 11 April 2026 05:56:03 +0000 (0:00:00.920) 0:45:59.597 ********
2026-04-11 05:56:08.762864 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762873 | orchestrator |
2026-04-11 05:56:08.762881 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-11 05:56:08.762890 | orchestrator | Saturday 11 April 2026 05:56:04 +0000 (0:00:00.819) 0:46:00.417 ********
2026-04-11 05:56:08.762898 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:08.762907 | orchestrator |
2026-04-11 05:56:08.762916 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-11 05:56:08.762924 | orchestrator | Saturday 11 April 2026 05:56:04 +0000 (0:00:00.787) 0:46:01.204 ********
2026-04-11 05:56:08.762933 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:08.762941 | orchestrator |
2026-04-11 05:56:08.762950 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-11 05:56:08.762959 | orchestrator | Saturday 11 April 2026 05:56:06 +0000 (0:00:01.603) 0:46:02.807 ********
2026-04-11 05:56:08.762967 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:08.762976 | orchestrator |
2026-04-11 05:56:08.762984 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-11 05:56:08.762993 | orchestrator | Saturday 11 April 2026 05:56:08 +0000 (0:00:01.929) 0:46:04.737 ********
2026-04-11 05:56:08.763001 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-04-11 05:56:08.763010 | orchestrator |
2026-04-11 05:56:08.763025 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-11 05:56:53.432333 | orchestrator | Saturday 11 April 2026 05:56:09 +0000 (0:00:01.164) 0:46:05.902 ********
2026-04-11 05:56:53.432464 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.432484 | orchestrator |
2026-04-11 05:56:53.432498 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-11 05:56:53.432509 | orchestrator | Saturday 11 April 2026 05:56:10 +0000 (0:00:01.198) 0:46:07.100 ********
2026-04-11 05:56:53.432521 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.432532 | orchestrator |
2026-04-11 05:56:53.432543 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-11 05:56:53.432554 | orchestrator | Saturday 11 April 2026 05:56:12 +0000 (0:00:01.153) 0:46:08.254 ********
2026-04-11 05:56:53.432565 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-11 05:56:53.432576 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-11 05:56:53.432588 | orchestrator |
2026-04-11 05:56:53.432599 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-11 05:56:53.432610 | orchestrator | Saturday 11 April 2026 05:56:13 +0000 (0:00:01.927) 0:46:10.181 ********
2026-04-11 05:56:53.432620 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:53.432633 | orchestrator |
2026-04-11 05:56:53.432644 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-11 05:56:53.432654 | orchestrator | Saturday 11 April 2026 05:56:15 +0000 (0:00:01.443) 0:46:11.624 ********
2026-04-11 05:56:53.432665 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.432676 | orchestrator |
2026-04-11 05:56:53.432687 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-11 05:56:53.432698 | orchestrator | Saturday 11 April 2026 05:56:16 +0000 (0:00:01.136) 0:46:12.762 ********
2026-04-11 05:56:53.432708 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.432746 | orchestrator |
2026-04-11 05:56:53.432758 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-11 05:56:53.432768 | orchestrator | Saturday 11 April 2026 05:56:17 +0000 (0:00:00.859) 0:46:13.621 ********
2026-04-11 05:56:53.432779 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.432790 | orchestrator |
2026-04-11 05:56:53.432801 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-11 05:56:53.432811 | orchestrator | Saturday 11 April 2026 05:56:18 +0000 (0:00:00.827) 0:46:14.449 ********
2026-04-11 05:56:53.432822 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-04-11 05:56:53.432835 | orchestrator |
2026-04-11 05:56:53.432854 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-11 05:56:53.432872 | orchestrator | Saturday 11 April 2026 05:56:19 +0000 (0:00:01.116) 0:46:15.566 ********
2026-04-11 05:56:53.432892 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:53.432911 | orchestrator |
2026-04-11 05:56:53.432929 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-11 05:56:53.432941 | orchestrator | Saturday 11 April 2026 05:56:21 +0000 (0:00:01.770) 0:46:17.336 ********
2026-04-11 05:56:53.432952 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-11 05:56:53.432963 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-11 05:56:53.432974 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-11 05:56:53.432985 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.432996 | orchestrator |
2026-04-11 05:56:53.433007 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-11 05:56:53.433017 | orchestrator | Saturday 11 April 2026 05:56:22 +0000 (0:00:01.146) 0:46:18.483 ********
2026-04-11 05:56:53.433028 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.433039 | orchestrator |
2026-04-11 05:56:53.433053 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-11 05:56:53.433076 | orchestrator | Saturday 11 April 2026 05:56:23 +0000 (0:00:01.178) 0:46:19.662 ********
2026-04-11 05:56:53.433102 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.433119 | orchestrator |
2026-04-11 05:56:53.433136 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-11 05:56:53.433173 | orchestrator | Saturday 11 April 2026 05:56:24 +0000 (0:00:01.177) 0:46:20.840 ********
2026-04-11 05:56:53.433189 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.433206 | orchestrator |
2026-04-11 05:56:53.433223 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-11 05:56:53.433239 | orchestrator | Saturday 11 April 2026 05:56:25 +0000 (0:00:01.179) 0:46:22.019 ********
2026-04-11 05:56:53.433302 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.433320 | orchestrator |
2026-04-11 05:56:53.433337 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-11 05:56:53.433355 | orchestrator | Saturday 11 April 2026 05:56:27 +0000 (0:00:01.195) 0:46:23.215 ********
2026-04-11 05:56:53.433372 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.433390 | orchestrator |
2026-04-11 05:56:53.433408 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-11 05:56:53.433425 | orchestrator | Saturday 11 April 2026 05:56:27 +0000 (0:00:00.806) 0:46:24.022 ********
2026-04-11 05:56:53.433443 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:53.433460 | orchestrator |
2026-04-11 05:56:53.433479 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-11 05:56:53.433498 | orchestrator | Saturday 11 April 2026 05:56:29 +0000 (0:00:02.154) 0:46:26.177 ********
2026-04-11 05:56:53.433510 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:53.433521 | orchestrator |
2026-04-11 05:56:53.433531 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-11 05:56:53.433542 | orchestrator | Saturday 11 April 2026 05:56:30 +0000 (0:00:00.815) 0:46:26.992 ********
2026-04-11 05:56:53.433566 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-04-11 05:56:53.433577 | orchestrator |
2026-04-11 05:56:53.433610 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-11 05:56:53.433621 | orchestrator | Saturday 11 April 2026 05:56:32 +0000 (0:00:01.255) 0:46:28.248 ********
2026-04-11 05:56:53.433632 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.433643 | orchestrator |
2026-04-11 05:56:53.433653 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-11 05:56:53.433664 | orchestrator | Saturday 11 April 2026 05:56:33 +0000 (0:00:01.163) 0:46:29.411 ********
2026-04-11 05:56:53.433675 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.433686 | orchestrator |
2026-04-11 05:56:53.433696 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-11 05:56:53.433707 | orchestrator | Saturday 11 April 2026 05:56:34 +0000 (0:00:01.175) 0:46:30.587 ********
2026-04-11 05:56:53.433718 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.433729 | orchestrator |
2026-04-11 05:56:53.433740 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-11 05:56:53.433750 | orchestrator | Saturday 11 April 2026 05:56:35 +0000 (0:00:01.212) 0:46:31.800 ********
2026-04-11 05:56:53.433761 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.433772 | orchestrator |
2026-04-11 05:56:53.433783 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-11 05:56:53.433793 | orchestrator | Saturday 11 April 2026 05:56:36 +0000 (0:00:01.201) 0:46:33.001 ********
2026-04-11 05:56:53.433804 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.433815 | orchestrator |
2026-04-11 05:56:53.433825 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-11 05:56:53.433836 | orchestrator | Saturday 11 April 2026 05:56:37 +0000 (0:00:01.133) 0:46:34.135 ********
2026-04-11 05:56:53.433847 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.433858 | orchestrator |
2026-04-11 05:56:53.433869 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-11 05:56:53.433879 | orchestrator | Saturday 11 April 2026 05:56:39 +0000 (0:00:01.141) 0:46:35.276 ********
2026-04-11 05:56:53.433890 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.433901 | orchestrator |
2026-04-11 05:56:53.433912 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-11 05:56:53.433922 | orchestrator | Saturday 11 April 2026 05:56:40 +0000 (0:00:01.147) 0:46:36.424 ********
2026-04-11 05:56:53.433933 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:56:53.433944 | orchestrator |
2026-04-11 05:56:53.433955 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-11 05:56:53.433965 | orchestrator | Saturday 11 April 2026 05:56:41 +0000 (0:00:01.225) 0:46:37.649 ********
2026-04-11 05:56:53.433976 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:56:53.433987 | orchestrator |
2026-04-11 05:56:53.433998 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-11 05:56:53.434008 | orchestrator | Saturday 11 April 2026 05:56:42 +0000 (0:00:00.808) 0:46:38.459 ********
2026-04-11 05:56:53.434086 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-04-11 05:56:53.434098 | orchestrator |
2026-04-11 05:56:53.434110 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-11 05:56:53.434120 | orchestrator | Saturday 11 April 2026 05:56:43 +0000 (0:00:01.157) 0:46:39.616 ********
2026-04-11 05:56:53.434131 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-04-11 05:56:53.434143 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-11 05:56:53.434154 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-11 05:56:53.434165 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-11 05:56:53.434176 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-11 05:56:53.434195 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-11 05:56:53.434206 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-11 05:56:53.434217 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-11 05:56:53.434228 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-11 05:56:53.434239 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-11 05:56:53.434272 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-11 05:56:53.434291 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-11 05:56:53.434303 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-11 05:56:53.434314 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-11 05:56:53.434325 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-04-11 05:56:53.434336 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-04-11 05:56:53.434347 | orchestrator |
2026-04-11 05:56:53.434358 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-11 05:56:53.434369 | orchestrator | Saturday 11 April 2026 05:56:49 +0000 (0:00:06.309) 0:46:45.925 ********
2026-04-11 05:56:53.434379 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-04-11 05:56:53.434390 | orchestrator |
2026-04-11 05:56:53.434401 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-11 05:56:53.434412 | orchestrator | Saturday 11 April 2026 05:56:50 +0000 (0:00:01.271) 0:46:47.197 ********
2026-04-11 05:56:53.434423 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 05:56:53.434435 | orchestrator |
2026-04-11 05:56:53.434446 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-11 05:56:53.434457 | orchestrator | Saturday 11 April 2026 05:56:52 +0000 (0:00:01.487) 0:46:48.684 ********
2026-04-11 05:56:53.434468 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 05:56:53.434479 | orchestrator |
2026-04-11 05:56:53.434499 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-11 05:57:33.870407 | orchestrator | Saturday 11 April 2026 05:56:54 +0000 (0:00:01.578) 0:46:50.263 ********
2026-04-11 05:57:33.870558 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.870590 | orchestrator |
2026-04-11 05:57:33.870610 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-11 05:57:33.870629 | orchestrator | Saturday 11 April 2026 05:56:54 +0000 (0:00:00.801) 0:46:51.065 ********
2026-04-11 05:57:33.870646 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.870662 | orchestrator |
2026-04-11 05:57:33.870680 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-11 05:57:33.870698 | orchestrator | Saturday 11 April 2026 05:56:55 +0000 (0:00:00.804) 0:46:51.870 ********
2026-04-11 05:57:33.870715 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.870732 | orchestrator |
2026-04-11 05:57:33.870749 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-11 05:57:33.870767 | orchestrator | Saturday 11 April 2026 05:56:56 +0000 (0:00:00.771) 0:46:52.642 ********
2026-04-11 05:57:33.870785 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.870804 | orchestrator |
2026-04-11 05:57:33.870822 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-11 05:57:33.870842 | orchestrator | Saturday 11 April 2026 05:56:57 +0000 (0:00:00.777) 0:46:53.419 ********
2026-04-11 05:57:33.870860 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.870880 | orchestrator |
2026-04-11 05:57:33.870900 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-11 05:57:33.870923 | orchestrator | Saturday 11 April 2026 05:56:57 +0000 (0:00:00.775) 0:46:54.195 ********
2026-04-11 05:57:33.870976 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.870998 | orchestrator |
2026-04-11 05:57:33.871019 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-11 05:57:33.871038 | orchestrator | Saturday 11 April 2026 05:56:58 +0000 (0:00:00.786) 0:46:54.981 ********
2026-04-11 05:57:33.871057 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.871075 | orchestrator |
2026-04-11 05:57:33.871092 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-11 05:57:33.871110 | orchestrator | Saturday 11 April 2026 05:56:59 +0000 (0:00:00.813) 0:46:55.795 ********
2026-04-11 05:57:33.871128 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.871147 | orchestrator |
2026-04-11 05:57:33.871166 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-11 05:57:33.871184 | orchestrator | Saturday 11 April 2026 05:57:00 +0000 (0:00:00.859) 0:46:56.618 ********
2026-04-11 05:57:33.871203 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.871222 | orchestrator |
2026-04-11 05:57:33.871241 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-11 05:57:33.871293 | orchestrator | Saturday 11 April 2026 05:57:01 +0000 (0:00:00.859) 0:46:57.477 ********
2026-04-11 05:57:33.871313 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.871333 | orchestrator |
2026-04-11 05:57:33.871351 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-11 05:57:33.871370 | orchestrator | Saturday 11 April 2026 05:57:02 +0000 (0:00:00.838) 0:46:58.316 ********
2026-04-11 05:57:33.871390 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:57:33.871409 | orchestrator |
2026-04-11 05:57:33.871427 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-11 05:57:33.871445 | orchestrator | Saturday 11 April 2026 05:57:03 +0000 (0:00:00.915) 0:46:59.232 ********
2026-04-11 05:57:33.871467 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-04-11 05:57:33.871485 | orchestrator |
2026-04-11 05:57:33.871504 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-11 05:57:33.871523 | orchestrator | Saturday 11 April 2026 05:57:07 +0000 (0:00:04.250) 0:47:03.483 ********
2026-04-11 05:57:33.871541 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 05:57:33.871559 | orchestrator |
2026-04-11 05:57:33.871595 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-11 05:57:33.871615 | orchestrator | Saturday 11 April 2026 05:57:08 +0000 (0:00:00.816) 0:47:04.299 ********
2026-04-11 05:57:33.871636 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-11 05:57:33.871659 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-11 05:57:33.871679 | orchestrator |
2026-04-11 05:57:33.871696 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-11 05:57:33.871714 | orchestrator | Saturday 11 April 2026 05:57:15 +0000 (0:00:07.259) 0:47:11.559 ********
2026-04-11 05:57:33.871732 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.871750 | orchestrator |
2026-04-11 05:57:33.871768 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-11 05:57:33.871787 | orchestrator | Saturday 11 April 2026 05:57:16 +0000 (0:00:00.799) 0:47:12.359 ********
2026-04-11 05:57:33.871803 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.871836 | orchestrator |
2026-04-11 05:57:33.871882 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 05:57:33.871899 | orchestrator | Saturday 11 April 2026 05:57:16 +0000 (0:00:00.760) 0:47:13.119 ********
2026-04-11 05:57:33.871915 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.871932 | orchestrator |
2026-04-11 05:57:33.871950 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 05:57:33.871969 | orchestrator | Saturday 11 April 2026 05:57:17 +0000 (0:00:00.819) 0:47:13.938 ********
2026-04-11 05:57:33.871988 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.872007 | orchestrator |
2026-04-11 05:57:33.872026 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 05:57:33.872046 | orchestrator | Saturday 11 April 2026 05:57:18 +0000 (0:00:00.832) 0:47:14.771 ********
2026-04-11 05:57:33.872064 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.872083 | orchestrator |
2026-04-11 05:57:33.872100 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 05:57:33.872120 | orchestrator | Saturday 11 April 2026 05:57:19 +0000 (0:00:00.859) 0:47:15.631 ********
2026-04-11 05:57:33.872140 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:57:33.872158 | orchestrator |
2026-04-11 05:57:33.872176 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 05:57:33.872196 | orchestrator | Saturday 11 April 2026 05:57:20 +0000 (0:00:00.889) 0:47:16.520 ********
2026-04-11 05:57:33.872216 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-11 05:57:33.872236 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-11 05:57:33.872308 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-11 05:57:33.872329 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.872348 | orchestrator |
2026-04-11 05:57:33.872367 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 05:57:33.872387 | orchestrator | Saturday 11 April 2026 05:57:21 +0000 (0:00:01.405) 0:47:17.926 ********
2026-04-11 05:57:33.872403 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-11 05:57:33.872420 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-11 05:57:33.872437 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-11 05:57:33.872456 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.872475 | orchestrator |
2026-04-11 05:57:33.872494 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 05:57:33.872514 | orchestrator | Saturday 11 April 2026 05:57:23 +0000 (0:00:01.508) 0:47:19.435 ********
2026-04-11 05:57:33.872533 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-11 05:57:33.872550 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-11 05:57:33.872568 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-11 05:57:33.872584 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.872600 | orchestrator |
2026-04-11 05:57:33.872615 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 05:57:33.872631 | orchestrator | Saturday 11 April 2026 05:57:24 +0000 (0:00:01.068) 0:47:20.503 ********
2026-04-11 05:57:33.872647 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:57:33.872663 | orchestrator |
2026-04-11 05:57:33.872679 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 05:57:33.872695 | orchestrator | Saturday 11 April 2026 05:57:25 +0000 (0:00:00.830) 0:47:21.334 ********
2026-04-11 05:57:33.872712 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-11 05:57:33.872729 | orchestrator |
2026-04-11 05:57:33.872748 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-11 05:57:33.872763 | orchestrator | Saturday 11 April 2026 05:57:26 +0000 (0:00:01.004) 0:47:22.339 ********
2026-04-11 05:57:33.872779 | orchestrator | changed: [testbed-node-5]
2026-04-11 05:57:33.872813 | orchestrator |
2026-04-11 05:57:33.872829 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-11 05:57:33.872844 | orchestrator | Saturday 11 April 2026 05:57:27 +0000 (0:00:01.411) 0:47:23.750 ********
2026-04-11 05:57:33.872860 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:57:33.872876 | orchestrator |
2026-04-11 05:57:33.872892 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-11 05:57:33.872919 | orchestrator | Saturday 11 April 2026 05:57:28 +0000 (0:00:00.780) 0:47:24.531 ********
2026-04-11 05:57:33.872936 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 05:57:33.872954 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 05:57:33.872970 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 05:57:33.872986 | orchestrator |
2026-04-11 05:57:33.873004 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-11 05:57:33.873020 | orchestrator | Saturday 11 April 2026 05:57:30 +0000 (0:00:01.682) 0:47:26.213 ********
2026-04-11 05:57:33.873036 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5
2026-04-11 05:57:33.873053 | orchestrator |
2026-04-11 05:57:33.873069 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-11 05:57:33.873085 | orchestrator | Saturday 11 April 2026 05:57:31 +0000 (0:00:01.125) 0:47:27.339 ********
2026-04-11 05:57:33.873101 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.873116 | orchestrator |
2026-04-11 05:57:33.873133 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-11 05:57:33.873150 | orchestrator | Saturday 11 April 2026 05:57:32 +0000 (0:00:01.138) 0:47:28.477 ********
2026-04-11 05:57:33.873166 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:57:33.873182 | orchestrator |
2026-04-11 05:57:33.873199 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-11 05:57:33.873215 | orchestrator | Saturday 11 April 2026 05:57:33 +0000 (0:00:01.138) 0:47:29.616 ********
2026-04-11 05:57:33.873231 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:57:33.873312 | orchestrator |
2026-04-11 05:57:33.873351 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-11 05:58:40.990135 | orchestrator | Saturday 11 April 2026 05:57:34 +0000 (0:00:01.465) 0:47:31.082 ********
2026-04-11 05:58:40.990317 | orchestrator | ok: [testbed-node-5]
2026-04-11 05:58:40.990338 | orchestrator |
2026-04-11 05:58:40.990351 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-11 05:58:40.990362 | orchestrator | Saturday 11 April 2026 05:57:36 +0000 (0:00:01.213) 0:47:32.295 ********
2026-04-11 05:58:40.990374 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-11 05:58:40.990386 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-11 05:58:40.990398 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-11 05:58:40.990410 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-11 05:58:40.990421 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-11 05:58:40.990432 | orchestrator |
2026-04-11 05:58:40.990443 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-11 05:58:40.990453 | orchestrator | Saturday 11 April 2026 05:57:38 +0000 (0:00:02.500) 0:47:34.796 ********
2026-04-11 05:58:40.990464 | orchestrator | skipping: [testbed-node-5]
2026-04-11 05:58:40.990476 | orchestrator |
2026-04-11 05:58:40.990487 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-11 05:58:40.990498 | orchestrator | Saturday 11 April 2026 05:57:39 +0000 (0:00:00.769) 0:47:35.565 ********
2026-04-11 05:58:40.990509 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5
2026-04-11 05:58:40.990519 | orchestrator |
2026-04-11 05:58:40.990556 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-11 05:58:40.990567 | orchestrator | Saturday 11 April 2026 05:57:40 +0000 (0:00:01.113) 0:47:36.679 ********
2026-04-11 05:58:40.990578 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-11 05:58:40.990588 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-11 05:58:40.990599 | orchestrator |
2026-04-11 05:58:40.990610 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-11 05:58:40.990622 | orchestrator | Saturday 11 April 2026 05:57:42 +0000 (0:00:01.810) 0:47:38.490 ********
2026-04-11 05:58:40.990634 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 05:58:40.990646 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-11 05:58:40.990659 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-11 05:58:40.990683 | orchestrator |
2026-04-11 05:58:40.990696 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-11 05:58:40.990708 | orchestrator | Saturday 11 April 2026 05:57:45 +0000 (0:00:03.267) 0:47:41.757 ********
2026-04-11 05:58:40.990720 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-11 05:58:40.990733 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-11
05:58:40.990746 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:58:40.990758 | orchestrator | 2026-04-11 05:58:40.990771 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-11 05:58:40.990783 | orchestrator | Saturday 11 April 2026 05:57:47 +0000 (0:00:01.689) 0:47:43.446 ******** 2026-04-11 05:58:40.990795 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.990807 | orchestrator | 2026-04-11 05:58:40.990819 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-11 05:58:40.990831 | orchestrator | Saturday 11 April 2026 05:57:48 +0000 (0:00:00.880) 0:47:44.327 ******** 2026-04-11 05:58:40.990844 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.990856 | orchestrator | 2026-04-11 05:58:40.990868 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-11 05:58:40.990880 | orchestrator | Saturday 11 April 2026 05:57:48 +0000 (0:00:00.753) 0:47:45.081 ******** 2026-04-11 05:58:40.990893 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.990905 | orchestrator | 2026-04-11 05:58:40.990917 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-11 05:58:40.990944 | orchestrator | Saturday 11 April 2026 05:57:49 +0000 (0:00:00.797) 0:47:45.878 ******** 2026-04-11 05:58:40.990957 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-04-11 05:58:40.990970 | orchestrator | 2026-04-11 05:58:40.990981 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-11 05:58:40.990991 | orchestrator | Saturday 11 April 2026 05:57:50 +0000 (0:00:01.127) 0:47:47.006 ******** 2026-04-11 05:58:40.991002 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:58:40.991012 | orchestrator | 2026-04-11 05:58:40.991023 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-04-11 05:58:40.991034 | orchestrator | Saturday 11 April 2026 05:57:52 +0000 (0:00:01.566) 0:47:48.573 ******** 2026-04-11 05:58:40.991044 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:58:40.991055 | orchestrator | 2026-04-11 05:58:40.991065 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-11 05:58:40.991076 | orchestrator | Saturday 11 April 2026 05:57:55 +0000 (0:00:03.385) 0:47:51.958 ******** 2026-04-11 05:58:40.991086 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-04-11 05:58:40.991097 | orchestrator | 2026-04-11 05:58:40.991108 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-11 05:58:40.991118 | orchestrator | Saturday 11 April 2026 05:57:56 +0000 (0:00:01.147) 0:47:53.106 ******** 2026-04-11 05:58:40.991128 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:58:40.991139 | orchestrator | 2026-04-11 05:58:40.991150 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-11 05:58:40.991168 | orchestrator | Saturday 11 April 2026 05:57:58 +0000 (0:00:02.020) 0:47:55.126 ******** 2026-04-11 05:58:40.991179 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:58:40.991190 | orchestrator | 2026-04-11 05:58:40.991200 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-11 05:58:40.991227 | orchestrator | Saturday 11 April 2026 05:58:00 +0000 (0:00:01.966) 0:47:57.093 ******** 2026-04-11 05:58:40.991239 | orchestrator | ok: [testbed-node-5] 2026-04-11 05:58:40.991269 | orchestrator | 2026-04-11 05:58:40.991281 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-11 05:58:40.991291 | orchestrator | Saturday 11 April 2026 05:58:03 +0000 (0:00:02.239) 0:47:59.332 ******** 2026-04-11 
05:58:40.991302 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.991313 | orchestrator | 2026-04-11 05:58:40.991323 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-11 05:58:40.991334 | orchestrator | Saturday 11 April 2026 05:58:04 +0000 (0:00:01.153) 0:48:00.486 ******** 2026-04-11 05:58:40.991344 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.991355 | orchestrator | 2026-04-11 05:58:40.991366 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-11 05:58:40.991376 | orchestrator | Saturday 11 April 2026 05:58:05 +0000 (0:00:01.186) 0:48:01.673 ******** 2026-04-11 05:58:40.991387 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-04-11 05:58:40.991398 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-04-11 05:58:40.991408 | orchestrator | 2026-04-11 05:58:40.991419 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-11 05:58:40.991429 | orchestrator | Saturday 11 April 2026 05:58:07 +0000 (0:00:01.816) 0:48:03.490 ******** 2026-04-11 05:58:40.991440 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-04-11 05:58:40.991451 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-04-11 05:58:40.991461 | orchestrator | 2026-04-11 05:58:40.991472 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-11 05:58:40.991482 | orchestrator | Saturday 11 April 2026 05:58:10 +0000 (0:00:02.878) 0:48:06.368 ******** 2026-04-11 05:58:40.991493 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-04-11 05:58:40.991504 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-04-11 05:58:40.991515 | orchestrator | 2026-04-11 05:58:40.991525 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-11 05:58:40.991536 | orchestrator | Saturday 11 April 2026 05:58:14 +0000 (0:00:04.385) 
0:48:10.754 ******** 2026-04-11 05:58:40.991546 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.991557 | orchestrator | 2026-04-11 05:58:40.991568 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-11 05:58:40.991578 | orchestrator | Saturday 11 April 2026 05:58:15 +0000 (0:00:00.898) 0:48:11.652 ******** 2026-04-11 05:58:40.991589 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-04-11 05:58:40.991600 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:58:40.991610 | orchestrator | 2026-04-11 05:58:40.991621 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-11 05:58:40.991631 | orchestrator | Saturday 11 April 2026 05:58:28 +0000 (0:00:13.373) 0:48:25.026 ******** 2026-04-11 05:58:40.991642 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.991652 | orchestrator | 2026-04-11 05:58:40.991663 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-04-11 05:58:40.991674 | orchestrator | Saturday 11 April 2026 05:58:29 +0000 (0:00:00.912) 0:48:25.939 ******** 2026-04-11 05:58:40.991684 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.991695 | orchestrator | 2026-04-11 05:58:40.991705 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-04-11 05:58:40.991716 | orchestrator | Saturday 11 April 2026 05:58:30 +0000 (0:00:00.807) 0:48:26.747 ******** 2026-04-11 05:58:40.991726 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.991744 | orchestrator | 2026-04-11 05:58:40.991755 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-04-11 05:58:40.991765 | orchestrator | Saturday 11 April 2026 05:58:31 +0000 (0:00:00.761) 0:48:27.509 ******** 2026-04-11 05:58:40.991776 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 2026-04-11 05:58:40.991787 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-11 05:58:40.991797 | orchestrator | 2026-04-11 05:58:40.991813 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-11 05:58:40.991824 | orchestrator | Saturday 11 April 2026 05:58:36 +0000 (0:00:04.816) 0:48:32.325 ******** 2026-04-11 05:58:40.991834 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.991845 | orchestrator | 2026-04-11 05:58:40.991856 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-11 05:58:40.991866 | orchestrator | Saturday 11 April 2026 05:58:36 +0000 (0:00:00.793) 0:48:33.119 ******** 2026-04-11 05:58:40.991877 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.991887 | orchestrator | 2026-04-11 05:58:40.991898 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-11 05:58:40.991908 | orchestrator | Saturday 11 April 2026 05:58:37 +0000 (0:00:00.818) 0:48:33.938 ******** 2026-04-11 05:58:40.991919 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.991929 | orchestrator | 2026-04-11 05:58:40.991940 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-11 05:58:40.991951 | orchestrator | Saturday 11 April 2026 05:58:38 +0000 (0:00:00.796) 0:48:34.735 ******** 2026-04-11 05:58:40.991961 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.991972 | orchestrator | 2026-04-11 05:58:40.991982 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] 
********************************** 2026-04-11 05:58:40.991993 | orchestrator | Saturday 11 April 2026 05:58:39 +0000 (0:00:00.868) 0:48:35.603 ******** 2026-04-11 05:58:40.992004 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.992014 | orchestrator | 2026-04-11 05:58:40.992025 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-11 05:58:40.992035 | orchestrator | Saturday 11 April 2026 05:58:40 +0000 (0:00:00.807) 0:48:36.411 ******** 2026-04-11 05:58:40.992046 | orchestrator | skipping: [testbed-node-5] 2026-04-11 05:58:40.992057 | orchestrator | 2026-04-11 05:58:40.992067 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-11 05:58:40.992085 | orchestrator | Saturday 11 April 2026 05:58:40 +0000 (0:00:00.783) 0:48:37.194 ******** 2026-04-11 06:00:27.954852 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:00:27.954947 | orchestrator | 2026-04-11 06:00:27.954958 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-04-11 06:00:27.954967 | orchestrator | 2026-04-11 06:00:27.954974 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-11 06:00:27.954981 | orchestrator | Saturday 11 April 2026 05:58:42 +0000 (0:00:01.790) 0:48:38.985 ******** 2026-04-11 06:00:27.954988 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:00:27.954996 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:00:27.955003 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:00:27.955010 | orchestrator | 2026-04-11 06:00:27.955017 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 06:00:27.955024 | orchestrator | Saturday 11 April 2026 05:58:44 +0000 (0:00:01.668) 0:48:40.653 ******** 2026-04-11 06:00:27.955031 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:00:27.955037 | orchestrator | ok: 
[testbed-node-4] 2026-04-11 06:00:27.955044 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:00:27.955051 | orchestrator | 2026-04-11 06:00:27.955058 | orchestrator | TASK [Re-enable pg autoscale on pools] ***************************************** 2026-04-11 06:00:27.955065 | orchestrator | Saturday 11 April 2026 05:58:45 +0000 (0:00:01.477) 0:48:42.131 ******** 2026-04-11 06:00:27.955071 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-04-11 06:00:27.955099 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-04-11 06:00:27.955107 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-04-11 06:00:27.955114 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-04-11 06:00:27.955121 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-04-11 06:00:27.955128 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-04-11 06:00:27.955135 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-04-11 06:00:27.955141 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-04-11 06:00:27.955148 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-04-11 06:00:27.955155 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-04-11 06:00:27.955161 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  
2026-04-11 06:00:27.955168 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-04-11 06:00:27.955175 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-04-11 06:00:27.955181 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-04-11 06:00:27.955188 | orchestrator | 2026-04-11 06:00:27.955195 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-04-11 06:00:27.955202 | orchestrator | Saturday 11 April 2026 05:59:59 +0000 (0:01:13.196) 0:49:55.327 ******** 2026-04-11 06:00:27.955208 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-04-11 06:00:27.955216 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-04-11 06:00:27.955223 | orchestrator | 2026-04-11 06:00:27.955229 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-04-11 06:00:27.955268 | orchestrator | Saturday 11 April 2026 06:00:04 +0000 (0:00:05.316) 0:50:00.644 ******** 2026-04-11 06:00:27.955275 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-11 06:00:27.955282 | orchestrator | 2026-04-11 06:00:27.955289 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-04-11 06:00:27.955296 | orchestrator | 2026-04-11 06:00:27.955302 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-11 06:00:27.955309 | orchestrator | Saturday 11 April 2026 06:00:07 +0000 (0:00:03.209) 0:50:03.854 ******** 2026-04-11 06:00:27.955315 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-04-11 06:00:27.955322 | orchestrator | 2026-04-11 06:00:27.955329 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 
2026-04-11 06:00:27.955335 | orchestrator | Saturday 11 April 2026 06:00:08 +0000 (0:00:01.133) 0:50:04.988 ******** 2026-04-11 06:00:27.955342 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:00:27.955348 | orchestrator | 2026-04-11 06:00:27.955355 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-11 06:00:27.955362 | orchestrator | Saturday 11 April 2026 06:00:10 +0000 (0:00:01.461) 0:50:06.449 ******** 2026-04-11 06:00:27.955368 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:00:27.955375 | orchestrator | 2026-04-11 06:00:27.955381 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-11 06:00:27.955388 | orchestrator | Saturday 11 April 2026 06:00:11 +0000 (0:00:01.155) 0:50:07.605 ******** 2026-04-11 06:00:27.955395 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:00:27.955407 | orchestrator | 2026-04-11 06:00:27.955415 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 06:00:27.955423 | orchestrator | Saturday 11 April 2026 06:00:12 +0000 (0:00:01.413) 0:50:09.018 ******** 2026-04-11 06:00:27.955430 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:00:27.955438 | orchestrator | 2026-04-11 06:00:27.955446 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-11 06:00:27.955467 | orchestrator | Saturday 11 April 2026 06:00:13 +0000 (0:00:01.133) 0:50:10.152 ******** 2026-04-11 06:00:27.955475 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:00:27.955483 | orchestrator | 2026-04-11 06:00:27.955491 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-11 06:00:27.955499 | orchestrator | Saturday 11 April 2026 06:00:15 +0000 (0:00:01.134) 0:50:11.287 ******** 2026-04-11 06:00:27.955506 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:00:27.955514 | orchestrator | 2026-04-11 
06:00:27.955524 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-11 06:00:27.955537 | orchestrator | Saturday 11 April 2026 06:00:16 +0000 (0:00:01.229) 0:50:12.517 ******** 2026-04-11 06:00:27.955549 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:27.955561 | orchestrator | 2026-04-11 06:00:27.955573 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-11 06:00:27.955586 | orchestrator | Saturday 11 April 2026 06:00:17 +0000 (0:00:01.169) 0:50:13.686 ******** 2026-04-11 06:00:27.955599 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:00:27.955611 | orchestrator | 2026-04-11 06:00:27.955619 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-11 06:00:27.955627 | orchestrator | Saturday 11 April 2026 06:00:18 +0000 (0:00:01.186) 0:50:14.872 ******** 2026-04-11 06:00:27.955635 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 06:00:27.955643 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:00:27.955650 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:00:27.955658 | orchestrator | 2026-04-11 06:00:27.955665 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-11 06:00:27.955673 | orchestrator | Saturday 11 April 2026 06:00:20 +0000 (0:00:01.734) 0:50:16.607 ******** 2026-04-11 06:00:27.955681 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:00:27.955689 | orchestrator | 2026-04-11 06:00:27.955696 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-11 06:00:27.955704 | orchestrator | Saturday 11 April 2026 06:00:21 +0000 (0:00:01.270) 0:50:17.878 ******** 2026-04-11 06:00:27.955712 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2026-04-11 06:00:27.955719 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:00:27.955727 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:00:27.955735 | orchestrator | 2026-04-11 06:00:27.955743 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-11 06:00:27.955751 | orchestrator | Saturday 11 April 2026 06:00:24 +0000 (0:00:02.979) 0:50:20.857 ******** 2026-04-11 06:00:27.955759 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-11 06:00:27.955767 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-11 06:00:27.955773 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-11 06:00:27.955780 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:27.955787 | orchestrator | 2026-04-11 06:00:27.955793 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-11 06:00:27.955800 | orchestrator | Saturday 11 April 2026 06:00:26 +0000 (0:00:01.495) 0:50:22.353 ******** 2026-04-11 06:00:27.955809 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-11 06:00:27.955823 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-11 06:00:27.955835 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-11 06:00:27.955842 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:27.955849 | orchestrator | 2026-04-11 06:00:27.955855 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-11 06:00:27.955862 | orchestrator | Saturday 11 April 2026 06:00:27 +0000 (0:00:01.683) 0:50:24.037 ******** 2026-04-11 06:00:27.955871 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:00:27.955880 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:00:27.955893 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:00:48.188729 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:48.188847 | orchestrator | 2026-04-11 06:00:48.188865 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-04-11 06:00:48.188878 | orchestrator | Saturday 11 April 2026 06:00:29 +0000 (0:00:01.291) 0:50:25.328 ******** 2026-04-11 06:00:48.188893 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 06:00:22.230132', 'end': '2026-04-11 06:00:22.281704', 'delta': '0:00:00.051572', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-11 06:00:48.188909 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '26fb3b048944', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 06:00:22.823579', 'end': '2026-04-11 06:00:22.870048', 'delta': '0:00:00.046469', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26fb3b048944'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-11 06:00:48.188961 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5c0324173fbf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 06:00:23.414075', 'end': 
'2026-04-11 06:00:23.454203', 'delta': '0:00:00.040128', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5c0324173fbf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-11 06:00:48.188974 | orchestrator | 2026-04-11 06:00:48.188986 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-11 06:00:48.188997 | orchestrator | Saturday 11 April 2026 06:00:30 +0000 (0:00:01.235) 0:50:26.564 ******** 2026-04-11 06:00:48.189007 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:00:48.189019 | orchestrator | 2026-04-11 06:00:48.189030 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-11 06:00:48.189041 | orchestrator | Saturday 11 April 2026 06:00:31 +0000 (0:00:01.275) 0:50:27.839 ******** 2026-04-11 06:00:48.189052 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:48.189063 | orchestrator | 2026-04-11 06:00:48.189073 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-11 06:00:48.189084 | orchestrator | Saturday 11 April 2026 06:00:32 +0000 (0:00:01.301) 0:50:29.140 ******** 2026-04-11 06:00:48.189095 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:00:48.189105 | orchestrator | 2026-04-11 06:00:48.189116 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-11 06:00:48.189127 | orchestrator | Saturday 11 April 2026 06:00:34 +0000 (0:00:01.211) 0:50:30.352 ******** 2026-04-11 06:00:48.189138 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:00:48.189148 | orchestrator | 2026-04-11 
06:00:48.189159 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 06:00:48.189170 | orchestrator | Saturday 11 April 2026 06:00:36 +0000 (0:00:01.986) 0:50:32.339 ******** 2026-04-11 06:00:48.189181 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:00:48.189191 | orchestrator | 2026-04-11 06:00:48.189202 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-11 06:00:48.189213 | orchestrator | Saturday 11 April 2026 06:00:37 +0000 (0:00:01.195) 0:50:33.535 ******** 2026-04-11 06:00:48.189223 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:48.189234 | orchestrator | 2026-04-11 06:00:48.189245 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-11 06:00:48.189294 | orchestrator | Saturday 11 April 2026 06:00:38 +0000 (0:00:01.302) 0:50:34.837 ******** 2026-04-11 06:00:48.189314 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:48.189334 | orchestrator | 2026-04-11 06:00:48.189353 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 06:00:48.189374 | orchestrator | Saturday 11 April 2026 06:00:39 +0000 (0:00:01.221) 0:50:36.059 ******** 2026-04-11 06:00:48.189389 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:48.189401 | orchestrator | 2026-04-11 06:00:48.189432 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-11 06:00:48.189446 | orchestrator | Saturday 11 April 2026 06:00:40 +0000 (0:00:01.117) 0:50:37.176 ******** 2026-04-11 06:00:48.189459 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:48.189472 | orchestrator | 2026-04-11 06:00:48.189485 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-11 06:00:48.189498 | orchestrator | Saturday 11 April 2026 06:00:42 +0000 (0:00:01.164) 
0:50:38.341 ******** 2026-04-11 06:00:48.189510 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:48.189522 | orchestrator | 2026-04-11 06:00:48.189535 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-11 06:00:48.189558 | orchestrator | Saturday 11 April 2026 06:00:43 +0000 (0:00:01.216) 0:50:39.558 ******** 2026-04-11 06:00:48.189571 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:48.189583 | orchestrator | 2026-04-11 06:00:48.189596 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-11 06:00:48.189610 | orchestrator | Saturday 11 April 2026 06:00:44 +0000 (0:00:01.142) 0:50:40.700 ******** 2026-04-11 06:00:48.189623 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:48.189633 | orchestrator | 2026-04-11 06:00:48.189644 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-11 06:00:48.189655 | orchestrator | Saturday 11 April 2026 06:00:45 +0000 (0:00:01.199) 0:50:41.900 ******** 2026-04-11 06:00:48.189666 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:48.189676 | orchestrator | 2026-04-11 06:00:48.189687 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-11 06:00:48.189699 | orchestrator | Saturday 11 April 2026 06:00:46 +0000 (0:00:01.145) 0:50:43.046 ******** 2026-04-11 06:00:48.189710 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:48.189721 | orchestrator | 2026-04-11 06:00:48.189731 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-11 06:00:48.189742 | orchestrator | Saturday 11 April 2026 06:00:47 +0000 (0:00:01.151) 0:50:44.198 ******** 2026-04-11 06:00:48.189754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:00:48.189768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:00:48.189786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:00:48.189799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 06:00:48.189812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 
'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:00:48.189823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:00:48.189849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:00:49.502467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4dd7cb49', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-11 06:00:49.502573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:00:49.502591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:00:49.502605 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:00:49.502619 | orchestrator | 2026-04-11 06:00:49.502632 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 06:00:49.502645 | orchestrator | Saturday 11 April 2026 06:00:49 +0000 (0:00:01.347) 0:50:45.546 ******** 2026-04-11 06:00:49.502659 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:00:49.502714 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:00:49.502729 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:00:49.502742 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:00:49.502761 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:00:49.502774 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:00:49.502786 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:00:49.502817 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4dd7cb49', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_4dd7cb49-c2ed-4736-af78-304fedd57f5a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:01:37.660754 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:01:37.660876 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:01:37.660917 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:01:37.660931 | orchestrator | 2026-04-11 06:01:37.660944 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-11 06:01:37.660956 | 
orchestrator | Saturday 11 April 2026 06:00:50 +0000 (0:00:01.350) 0:50:46.896 ******** 2026-04-11 06:01:37.660967 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:01:37.660979 | orchestrator | 2026-04-11 06:01:37.660991 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-11 06:01:37.661001 | orchestrator | Saturday 11 April 2026 06:00:52 +0000 (0:00:01.562) 0:50:48.458 ******** 2026-04-11 06:01:37.661012 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:01:37.661023 | orchestrator | 2026-04-11 06:01:37.661034 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 06:01:37.661044 | orchestrator | Saturday 11 April 2026 06:00:53 +0000 (0:00:01.159) 0:50:49.618 ******** 2026-04-11 06:01:37.661055 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:01:37.661066 | orchestrator | 2026-04-11 06:01:37.661077 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 06:01:37.661087 | orchestrator | Saturday 11 April 2026 06:00:54 +0000 (0:00:01.526) 0:50:51.145 ******** 2026-04-11 06:01:37.661098 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:01:37.661109 | orchestrator | 2026-04-11 06:01:37.661120 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 06:01:37.661134 | orchestrator | Saturday 11 April 2026 06:00:56 +0000 (0:00:01.284) 0:50:52.429 ******** 2026-04-11 06:01:37.661145 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:01:37.661156 | orchestrator | 2026-04-11 06:01:37.661166 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 06:01:37.661177 | orchestrator | Saturday 11 April 2026 06:00:57 +0000 (0:00:01.237) 0:50:53.666 ******** 2026-04-11 06:01:37.661188 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:01:37.661198 | orchestrator | 2026-04-11 06:01:37.661209 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 06:01:37.661219 | orchestrator | Saturday 11 April 2026 06:00:58 +0000 (0:00:01.188) 0:50:54.855 ******** 2026-04-11 06:01:37.661230 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 06:01:37.661241 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-11 06:01:37.661318 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-11 06:01:37.661333 | orchestrator | 2026-04-11 06:01:37.661347 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 06:01:37.661359 | orchestrator | Saturday 11 April 2026 06:01:00 +0000 (0:00:01.762) 0:50:56.618 ******** 2026-04-11 06:01:37.661373 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-11 06:01:37.661386 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-11 06:01:37.661399 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-11 06:01:37.661412 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:01:37.661424 | orchestrator | 2026-04-11 06:01:37.661437 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 06:01:37.661450 | orchestrator | Saturday 11 April 2026 06:01:01 +0000 (0:00:01.230) 0:50:57.848 ******** 2026-04-11 06:01:37.661463 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:01:37.661475 | orchestrator | 2026-04-11 06:01:37.661488 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 06:01:37.661501 | orchestrator | Saturday 11 April 2026 06:01:02 +0000 (0:00:01.189) 0:50:59.038 ******** 2026-04-11 06:01:37.661514 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 06:01:37.661527 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 
06:01:37.661541 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:01:37.661553 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 06:01:37.661573 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 06:01:37.661586 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 06:01:37.661617 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 06:01:37.661628 | orchestrator | 2026-04-11 06:01:37.661640 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 06:01:37.661650 | orchestrator | Saturday 11 April 2026 06:01:05 +0000 (0:00:02.265) 0:51:01.303 ******** 2026-04-11 06:01:37.661668 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 06:01:37.661680 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:01:37.661690 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:01:37.661701 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 06:01:37.661712 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 06:01:37.661723 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 06:01:37.661733 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 06:01:37.661744 | orchestrator | 2026-04-11 06:01:37.661755 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-04-11 06:01:37.661766 | orchestrator | Saturday 11 April 2026 06:01:07 +0000 (0:00:02.766) 0:51:04.070 
******** 2026-04-11 06:01:37.661776 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:01:37.661787 | orchestrator | 2026-04-11 06:01:37.661798 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-04-11 06:01:37.661808 | orchestrator | Saturday 11 April 2026 06:01:11 +0000 (0:00:03.190) 0:51:07.261 ******** 2026-04-11 06:01:37.661819 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:01:37.661830 | orchestrator | 2026-04-11 06:01:37.661841 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-04-11 06:01:37.661852 | orchestrator | Saturday 11 April 2026 06:01:14 +0000 (0:00:03.020) 0:51:10.281 ******** 2026-04-11 06:01:37.661862 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:01:37.661873 | orchestrator | 2026-04-11 06:01:37.661884 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-04-11 06:01:37.661895 | orchestrator | Saturday 11 April 2026 06:01:16 +0000 (0:00:02.098) 0:51:12.380 ******** 2026-04-11 06:01:37.661907 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4669', 'value': {'gid': 4669, 'name': 'testbed-node-3', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.13:6817/1898805517', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.13:6816', 'nonce': 1898805517}, {'type': 'v1', 'addr': '192.168.16.13:6817', 'nonce': 1898805517}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-04-11 
06:01:37.661921 | orchestrator | 2026-04-11 06:01:37.661932 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-04-11 06:01:37.661943 | orchestrator | Saturday 11 April 2026 06:01:17 +0000 (0:00:01.175) 0:51:13.555 ******** 2026-04-11 06:01:37.661953 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-3) 2026-04-11 06:01:37.661964 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-11 06:01:37.661975 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-11 06:01:37.661992 | orchestrator | 2026-04-11 06:01:37.662003 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-04-11 06:01:37.662076 | orchestrator | Saturday 11 April 2026 06:01:19 +0000 (0:00:02.006) 0:51:15.562 ******** 2026-04-11 06:01:37.662092 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4) 2026-04-11 06:01:37.662103 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-5) 2026-04-11 06:01:37.662114 | orchestrator | 2026-04-11 06:01:37.662124 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-04-11 06:01:37.662135 | orchestrator | Saturday 11 April 2026 06:01:20 +0000 (0:00:01.511) 0:51:17.074 ******** 2026-04-11 06:01:37.662146 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 06:01:37.662157 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 06:01:37.662167 | orchestrator | 2026-04-11 06:01:37.662178 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-04-11 06:01:37.662189 | orchestrator | Saturday 11 April 2026 06:01:31 +0000 (0:00:10.806) 0:51:27.880 ******** 2026-04-11 06:01:37.662200 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 
2026-04-11 06:01:37.662210 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 06:01:37.662221 | orchestrator | 2026-04-11 06:01:37.662232 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-04-11 06:01:37.662243 | orchestrator | Saturday 11 April 2026 06:01:35 +0000 (0:00:03.848) 0:51:31.728 ******** 2026-04-11 06:01:37.662276 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:01:37.662287 | orchestrator | 2026-04-11 06:01:37.662298 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-04-11 06:01:37.662317 | orchestrator | Saturday 11 April 2026 06:01:37 +0000 (0:00:02.135) 0:51:33.864 ******** 2026-04-11 06:02:01.302828 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:02:01.302943 | orchestrator | 2026-04-11 06:02:01.302960 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-04-11 06:02:01.302972 | orchestrator | 2026-04-11 06:02:01.302984 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-11 06:02:01.303010 | orchestrator | Saturday 11 April 2026 06:01:39 +0000 (0:00:01.592) 0:51:35.456 ******** 2026-04-11 06:02:01.303021 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-04-11 06:02:01.303032 | orchestrator | 2026-04-11 06:02:01.303043 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-11 06:02:01.303055 | orchestrator | Saturday 11 April 2026 06:01:40 +0000 (0:00:01.213) 0:51:36.670 ******** 2026-04-11 06:02:01.303066 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:01.303078 | orchestrator | 2026-04-11 06:02:01.303089 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-11 06:02:01.303100 | orchestrator | Saturday 11 April 2026 
06:01:41 +0000 (0:00:01.439) 0:51:38.109 ******** 2026-04-11 06:02:01.303110 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:01.303121 | orchestrator | 2026-04-11 06:02:01.303132 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-11 06:02:01.303143 | orchestrator | Saturday 11 April 2026 06:01:43 +0000 (0:00:01.126) 0:51:39.236 ******** 2026-04-11 06:02:01.303154 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:01.303165 | orchestrator | 2026-04-11 06:02:01.303176 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 06:02:01.303186 | orchestrator | Saturday 11 April 2026 06:01:44 +0000 (0:00:01.441) 0:51:40.678 ******** 2026-04-11 06:02:01.303197 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:01.303208 | orchestrator | 2026-04-11 06:02:01.303219 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-11 06:02:01.303230 | orchestrator | Saturday 11 April 2026 06:01:45 +0000 (0:00:01.243) 0:51:41.921 ******** 2026-04-11 06:02:01.303240 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:01.303321 | orchestrator | 2026-04-11 06:02:01.303335 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-11 06:02:01.303346 | orchestrator | Saturday 11 April 2026 06:01:46 +0000 (0:00:01.184) 0:51:43.105 ******** 2026-04-11 06:02:01.303356 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:01.303369 | orchestrator | 2026-04-11 06:02:01.303382 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-11 06:02:01.303396 | orchestrator | Saturday 11 April 2026 06:01:48 +0000 (0:00:01.161) 0:51:44.267 ******** 2026-04-11 06:02:01.303410 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:01.303423 | orchestrator | 2026-04-11 06:02:01.303435 | orchestrator | TASK [ceph-facts : Set_fact 
ceph_release ceph_stable_release] ****************** 2026-04-11 06:02:01.303449 | orchestrator | Saturday 11 April 2026 06:01:49 +0000 (0:00:01.174) 0:51:45.442 ******** 2026-04-11 06:02:01.303462 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:01.303474 | orchestrator | 2026-04-11 06:02:01.303487 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-11 06:02:01.303500 | orchestrator | Saturday 11 April 2026 06:01:50 +0000 (0:00:01.157) 0:51:46.600 ******** 2026-04-11 06:02:01.303513 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:02:01.303525 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:02:01.303538 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:02:01.303551 | orchestrator | 2026-04-11 06:02:01.303564 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-11 06:02:01.303577 | orchestrator | Saturday 11 April 2026 06:01:52 +0000 (0:00:01.704) 0:51:48.304 ******** 2026-04-11 06:02:01.303589 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:01.303602 | orchestrator | 2026-04-11 06:02:01.303615 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-11 06:02:01.303627 | orchestrator | Saturday 11 April 2026 06:01:53 +0000 (0:00:01.305) 0:51:49.609 ******** 2026-04-11 06:02:01.303638 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:02:01.303649 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:02:01.303659 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:02:01.303670 | orchestrator | 2026-04-11 06:02:01.303681 | orchestrator | TASK 
[ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-11 06:02:01.303691 | orchestrator | Saturday 11 April 2026 06:01:56 +0000 (0:00:03.042) 0:51:52.652 ******** 2026-04-11 06:02:01.303702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-11 06:02:01.303714 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-11 06:02:01.303724 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-11 06:02:01.303735 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:01.303746 | orchestrator | 2026-04-11 06:02:01.303757 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-11 06:02:01.303768 | orchestrator | Saturday 11 April 2026 06:01:57 +0000 (0:00:01.455) 0:51:54.108 ******** 2026-04-11 06:02:01.303781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-11 06:02:01.303794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-11 06:02:01.303824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-11 06:02:01.303850 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:01.303861 | orchestrator | 2026-04-11 06:02:01.303872 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-11 06:02:01.303883 | orchestrator | Saturday 11 April 2026 06:01:59 +0000 
(0:00:02.051) 0:51:56.160 ******** 2026-04-11 06:02:01.303897 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:01.303910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:01.303922 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:01.303933 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:01.303944 | orchestrator | 2026-04-11 06:02:01.303955 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-11 06:02:01.303966 | orchestrator | Saturday 11 April 2026 06:02:01 +0000 (0:00:01.222) 0:51:57.383 ******** 2026-04-11 06:02:01.303979 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 
06:01:53.928374', 'end': '2026-04-11 06:01:53.982049', 'delta': '0:00:00.053675', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-11 06:02:01.303993 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '26fb3b048944', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 06:01:54.551149', 'end': '2026-04-11 06:01:54.604138', 'delta': '0:00:00.052989', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26fb3b048944'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-11 06:02:01.304012 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5c0324173fbf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 06:01:55.131464', 'end': '2026-04-11 06:01:55.175983', 'delta': '0:00:00.044519', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['5c0324173fbf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-11 06:02:19.915544 | orchestrator | 2026-04-11 06:02:19.915672 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-11 06:02:19.915692 | orchestrator | Saturday 11 April 2026 06:02:02 +0000 (0:00:01.227) 0:51:58.611 ******** 2026-04-11 06:02:19.915720 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:19.915734 | orchestrator | 2026-04-11 06:02:19.915746 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-11 06:02:19.915757 | orchestrator | Saturday 11 April 2026 06:02:03 +0000 (0:00:01.226) 0:51:59.837 ******** 2026-04-11 06:02:19.915769 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:19.915781 | orchestrator | 2026-04-11 06:02:19.915792 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-11 06:02:19.915804 | orchestrator | Saturday 11 April 2026 06:02:04 +0000 (0:00:01.293) 0:52:01.131 ******** 2026-04-11 06:02:19.915816 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:19.915828 | orchestrator | 2026-04-11 06:02:19.915839 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-11 06:02:19.915851 | orchestrator | Saturday 11 April 2026 06:02:06 +0000 (0:00:01.268) 0:52:02.399 ******** 2026-04-11 06:02:19.915863 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-11 06:02:19.915874 | orchestrator | 2026-04-11 06:02:19.915886 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 06:02:19.915897 | orchestrator | Saturday 11 April 2026 06:02:08 +0000 (0:00:01.922) 0:52:04.321 ******** 2026-04-11 06:02:19.915908 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:19.915920 | orchestrator | 2026-04-11 
06:02:19.915931 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-11 06:02:19.915943 | orchestrator | Saturday 11 April 2026 06:02:09 +0000 (0:00:01.132) 0:52:05.454 ******** 2026-04-11 06:02:19.915954 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:19.915965 | orchestrator | 2026-04-11 06:02:19.915977 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-11 06:02:19.915988 | orchestrator | Saturday 11 April 2026 06:02:10 +0000 (0:00:01.166) 0:52:06.621 ******** 2026-04-11 06:02:19.915999 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:19.916011 | orchestrator | 2026-04-11 06:02:19.916022 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 06:02:19.916033 | orchestrator | Saturday 11 April 2026 06:02:11 +0000 (0:00:01.227) 0:52:07.848 ******** 2026-04-11 06:02:19.916046 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:19.916060 | orchestrator | 2026-04-11 06:02:19.916073 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-11 06:02:19.916086 | orchestrator | Saturday 11 April 2026 06:02:12 +0000 (0:00:01.190) 0:52:09.038 ******** 2026-04-11 06:02:19.916100 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:19.916113 | orchestrator | 2026-04-11 06:02:19.916126 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-11 06:02:19.916140 | orchestrator | Saturday 11 April 2026 06:02:13 +0000 (0:00:01.125) 0:52:10.165 ******** 2026-04-11 06:02:19.916154 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:19.916167 | orchestrator | 2026-04-11 06:02:19.916178 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-11 06:02:19.916190 | orchestrator | Saturday 11 April 2026 06:02:15 +0000 (0:00:01.165) 
0:52:11.330 ******** 2026-04-11 06:02:19.916201 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:19.916212 | orchestrator | 2026-04-11 06:02:19.916224 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-11 06:02:19.916235 | orchestrator | Saturday 11 April 2026 06:02:16 +0000 (0:00:01.153) 0:52:12.483 ******** 2026-04-11 06:02:19.916247 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:19.916304 | orchestrator | 2026-04-11 06:02:19.916316 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-11 06:02:19.916327 | orchestrator | Saturday 11 April 2026 06:02:17 +0000 (0:00:01.150) 0:52:13.634 ******** 2026-04-11 06:02:19.916338 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:19.916349 | orchestrator | 2026-04-11 06:02:19.916360 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-11 06:02:19.916372 | orchestrator | Saturday 11 April 2026 06:02:18 +0000 (0:00:01.134) 0:52:14.769 ******** 2026-04-11 06:02:19.916383 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:19.916394 | orchestrator | 2026-04-11 06:02:19.916404 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-11 06:02:19.916415 | orchestrator | Saturday 11 April 2026 06:02:19 +0000 (0:00:01.155) 0:52:15.925 ******** 2026-04-11 06:02:19.916428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:02:19.916461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 
'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200', 'dm-uuid-LVM-Bzm8veJ8WajxWE1rbQG3D6L1YQ7NRJWE5nYLkJZ3j15jpE3LHjt0hSXc3WZuWEzG'], 'uuids': ['5687e399-36a2-4cfe-ae2f-5c9610714106'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG']}})  2026-04-11 06:02:19.916482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7', 'scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d9c4f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-11 06:02:19.916496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PQeocr-BDfK-Omm3-UVAY-4ZFi-qC83-UyfjmY', 'scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c', 'scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003']}})  2026-04-11 06:02:19.916509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:02:19.916521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:02:19.916542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-28-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 06:02:19.916554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:02:19.916566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ', 'dm-uuid-CRYPT-LUKS2-4ce930e6d90647c5bf5f978d8b977bd0-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 06:02:19.916590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:02:21.261847 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003', 'dm-uuid-LVM-pkDfTbVQleSwcS4k7Dh9BVsoBeNfZTa2LK4cAT3noeZwIltxQlTmbG23aNcLYOeQ'], 'uuids': ['4ce930e6-d906-47c5-bf5f-978d8b977bd0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ']}})  2026-04-11 06:02:21.261972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ESicMG-Y3he-y5ZC-yq3K-67sS-s0jj-bJ518K', 'scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898', 'scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200']}})  2026-04-11 06:02:21.262011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:02:21.262094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f54fce7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 06:02:21.262127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:02:21.262139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:02:21.262151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG', 'dm-uuid-CRYPT-LUKS2-5687e39936a24cfeae2f5c9610714106-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 06:02:21.262172 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:21.262184 | orchestrator | 2026-04-11 06:02:21.262195 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 06:02:21.262206 | orchestrator | Saturday 11 April 2026 06:02:21 +0000 (0:00:01.407) 0:52:17.333 ******** 2026-04-11 06:02:21.262217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:21.262230 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200', 'dm-uuid-LVM-Bzm8veJ8WajxWE1rbQG3D6L1YQ7NRJWE5nYLkJZ3j15jpE3LHjt0hSXc3WZuWEzG'], 'uuids': ['5687e399-36a2-4cfe-ae2f-5c9610714106'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG']}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:21.262241 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7', 'scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d9c4f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:21.262287 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PQeocr-BDfK-Omm3-UVAY-4ZFi-qC83-UyfjmY', 'scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c', 'scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003']}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:21.388159 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:21.388340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:21.388360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-28-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:21.388373 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:21.388399 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ', 'dm-uuid-CRYPT-LUKS2-4ce930e6d90647c5bf5f978d8b977bd0-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:21.388412 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:21.388444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003', 'dm-uuid-LVM-pkDfTbVQleSwcS4k7Dh9BVsoBeNfZTa2LK4cAT3noeZwIltxQlTmbG23aNcLYOeQ'], 'uuids': ['4ce930e6-d906-47c5-bf5f-978d8b977bd0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ']}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:21.388469 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ESicMG-Y3he-y5ZC-yq3K-67sS-s0jj-bJ518K', 'scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898', 'scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200']}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:21.388485 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:21.388514 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f54fce7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:50.285026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:50.285153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:50.285174 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG', 'dm-uuid-CRYPT-LUKS2-5687e39936a24cfeae2f5c9610714106-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:02:50.285188 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:50.285203 | orchestrator | 2026-04-11 06:02:50.285215 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-11 06:02:50.285227 | orchestrator | Saturday 11 April 2026 06:02:22 +0000 (0:00:01.469) 0:52:18.802 ******** 2026-04-11 06:02:50.285239 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:50.285251 | orchestrator | 2026-04-11 06:02:50.285295 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-11 06:02:50.285307 | orchestrator | Saturday 11 April 2026 06:02:24 +0000 (0:00:01.515) 0:52:20.318 ******** 2026-04-11 06:02:50.285318 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:50.285329 | orchestrator | 2026-04-11 06:02:50.285340 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 06:02:50.285352 | orchestrator | Saturday 11 April 2026 06:02:25 +0000 (0:00:01.118) 0:52:21.436 ******** 2026-04-11 06:02:50.285363 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:50.285374 | orchestrator | 2026-04-11 06:02:50.285402 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 06:02:50.285414 | orchestrator | Saturday 11 April 2026 06:02:26 +0000 (0:00:01.507) 0:52:22.943 ******** 2026-04-11 06:02:50.285426 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:50.285437 | orchestrator | 2026-04-11 06:02:50.285448 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 06:02:50.285482 | orchestrator | Saturday 11 April 2026 06:02:27 +0000 (0:00:01.106) 0:52:24.049 ******** 2026-04-11 06:02:50.285494 | orchestrator | skipping: [testbed-node-3] 2026-04-11 
06:02:50.285504 | orchestrator | 2026-04-11 06:02:50.285515 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 06:02:50.285526 | orchestrator | Saturday 11 April 2026 06:02:29 +0000 (0:00:01.348) 0:52:25.397 ******** 2026-04-11 06:02:50.285537 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:50.285548 | orchestrator | 2026-04-11 06:02:50.285559 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 06:02:50.285570 | orchestrator | Saturday 11 April 2026 06:02:30 +0000 (0:00:01.122) 0:52:26.520 ******** 2026-04-11 06:02:50.285581 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-11 06:02:50.285592 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-11 06:02:50.285603 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-11 06:02:50.285614 | orchestrator | 2026-04-11 06:02:50.285624 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 06:02:50.285635 | orchestrator | Saturday 11 April 2026 06:02:32 +0000 (0:00:01.735) 0:52:28.256 ******** 2026-04-11 06:02:50.285646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-11 06:02:50.285657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-11 06:02:50.285668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-11 06:02:50.285679 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:50.285690 | orchestrator | 2026-04-11 06:02:50.285701 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 06:02:50.285712 | orchestrator | Saturday 11 April 2026 06:02:33 +0000 (0:00:01.149) 0:52:29.406 ******** 2026-04-11 06:02:50.285742 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-04-11 06:02:50.285755 | 
orchestrator | 2026-04-11 06:02:50.285767 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 06:02:50.285779 | orchestrator | Saturday 11 April 2026 06:02:34 +0000 (0:00:01.138) 0:52:30.544 ******** 2026-04-11 06:02:50.285790 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:50.285801 | orchestrator | 2026-04-11 06:02:50.285812 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 06:02:50.285822 | orchestrator | Saturday 11 April 2026 06:02:35 +0000 (0:00:01.138) 0:52:31.682 ******** 2026-04-11 06:02:50.285833 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:50.285844 | orchestrator | 2026-04-11 06:02:50.285855 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 06:02:50.285866 | orchestrator | Saturday 11 April 2026 06:02:36 +0000 (0:00:01.136) 0:52:32.819 ******** 2026-04-11 06:02:50.285880 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:50.285917 | orchestrator | 2026-04-11 06:02:50.285951 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 06:02:50.285963 | orchestrator | Saturday 11 April 2026 06:02:37 +0000 (0:00:01.207) 0:52:34.027 ******** 2026-04-11 06:02:50.285974 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:50.285985 | orchestrator | 2026-04-11 06:02:50.285996 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 06:02:50.286007 | orchestrator | Saturday 11 April 2026 06:02:39 +0000 (0:00:01.326) 0:52:35.353 ******** 2026-04-11 06:02:50.286083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 06:02:50.286096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 06:02:50.286106 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-04-11 06:02:50.286117 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:50.286128 | orchestrator | 2026-04-11 06:02:50.286139 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 06:02:50.286149 | orchestrator | Saturday 11 April 2026 06:02:40 +0000 (0:00:01.428) 0:52:36.781 ******** 2026-04-11 06:02:50.286170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 06:02:50.286217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 06:02:50.286229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 06:02:50.286241 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:50.286252 | orchestrator | 2026-04-11 06:02:50.286444 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 06:02:50.286463 | orchestrator | Saturday 11 April 2026 06:02:42 +0000 (0:00:01.448) 0:52:38.229 ******** 2026-04-11 06:02:50.286474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 06:02:50.286485 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 06:02:50.286496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 06:02:50.286507 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:02:50.286518 | orchestrator | 2026-04-11 06:02:50.286529 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 06:02:50.286540 | orchestrator | Saturday 11 April 2026 06:02:43 +0000 (0:00:01.492) 0:52:39.722 ******** 2026-04-11 06:02:50.286551 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:02:50.286562 | orchestrator | 2026-04-11 06:02:50.286573 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 06:02:50.286584 | orchestrator | Saturday 11 April 2026 06:02:44 +0000 
(0:00:01.241) 0:52:40.963 ******** 2026-04-11 06:02:50.286595 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-11 06:02:50.286605 | orchestrator | 2026-04-11 06:02:50.286627 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 06:02:50.286638 | orchestrator | Saturday 11 April 2026 06:02:46 +0000 (0:00:01.392) 0:52:42.356 ******** 2026-04-11 06:02:50.286649 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:02:50.286660 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:02:50.286671 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:02:50.286681 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-11 06:02:50.286693 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 06:02:50.286703 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 06:02:50.286714 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 06:02:50.286725 | orchestrator | 2026-04-11 06:02:50.286736 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 06:02:50.286747 | orchestrator | Saturday 11 April 2026 06:02:48 +0000 (0:00:02.250) 0:52:44.607 ******** 2026-04-11 06:02:50.286758 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:02:50.286769 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:02:50.286779 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:02:50.286790 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-11 06:02:50.286801 
| orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 06:02:50.286812 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 06:02:50.286823 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 06:02:50.286834 | orchestrator | 2026-04-11 06:02:50.286861 | orchestrator | TASK [Prevent restart from the packaging] ************************************** 2026-04-11 06:03:40.752849 | orchestrator | Saturday 11 April 2026 06:02:51 +0000 (0:00:02.833) 0:52:47.440 ******** 2026-04-11 06:03:40.752970 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.752987 | orchestrator | 2026-04-11 06:03:40.753022 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 06:03:40.753034 | orchestrator | Saturday 11 April 2026 06:02:52 +0000 (0:00:01.134) 0:52:48.575 ******** 2026-04-11 06:03:40.753045 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-04-11 06:03:40.753057 | orchestrator | 2026-04-11 06:03:40.753068 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 06:03:40.753079 | orchestrator | Saturday 11 April 2026 06:02:53 +0000 (0:00:01.117) 0:52:49.692 ******** 2026-04-11 06:03:40.753090 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-04-11 06:03:40.753100 | orchestrator | 2026-04-11 06:03:40.753111 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 06:03:40.753122 | orchestrator | Saturday 11 April 2026 06:02:54 +0000 (0:00:01.273) 0:52:50.966 ******** 2026-04-11 06:03:40.753133 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.753143 | orchestrator | 2026-04-11 06:03:40.753154 | orchestrator | TASK 
[ceph-handler : Check for an osd container] ******************************* 2026-04-11 06:03:40.753165 | orchestrator | Saturday 11 April 2026 06:02:55 +0000 (0:00:01.162) 0:52:52.129 ******** 2026-04-11 06:03:40.753176 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:03:40.753187 | orchestrator | 2026-04-11 06:03:40.753198 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-11 06:03:40.753208 | orchestrator | Saturday 11 April 2026 06:02:57 +0000 (0:00:01.507) 0:52:53.637 ******** 2026-04-11 06:03:40.753219 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:03:40.753230 | orchestrator | 2026-04-11 06:03:40.753240 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 06:03:40.753251 | orchestrator | Saturday 11 April 2026 06:02:59 +0000 (0:00:01.604) 0:52:55.242 ******** 2026-04-11 06:03:40.753325 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:03:40.753339 | orchestrator | 2026-04-11 06:03:40.753350 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 06:03:40.753360 | orchestrator | Saturday 11 April 2026 06:03:00 +0000 (0:00:01.562) 0:52:56.805 ******** 2026-04-11 06:03:40.753371 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.753384 | orchestrator | 2026-04-11 06:03:40.753397 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 06:03:40.753409 | orchestrator | Saturday 11 April 2026 06:03:01 +0000 (0:00:01.117) 0:52:57.922 ******** 2026-04-11 06:03:40.753422 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.753435 | orchestrator | 2026-04-11 06:03:40.753448 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 06:03:40.753461 | orchestrator | Saturday 11 April 2026 06:03:02 +0000 (0:00:01.102) 0:52:59.025 ******** 2026-04-11 06:03:40.753473 | 
orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.753483 | orchestrator | 2026-04-11 06:03:40.753494 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 06:03:40.753504 | orchestrator | Saturday 11 April 2026 06:03:03 +0000 (0:00:01.123) 0:53:00.148 ******** 2026-04-11 06:03:40.753515 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:03:40.753526 | orchestrator | 2026-04-11 06:03:40.753537 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 06:03:40.753547 | orchestrator | Saturday 11 April 2026 06:03:05 +0000 (0:00:01.520) 0:53:01.669 ******** 2026-04-11 06:03:40.753558 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:03:40.753569 | orchestrator | 2026-04-11 06:03:40.753579 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 06:03:40.753605 | orchestrator | Saturday 11 April 2026 06:03:06 +0000 (0:00:01.517) 0:53:03.187 ******** 2026-04-11 06:03:40.753616 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.753627 | orchestrator | 2026-04-11 06:03:40.753638 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 06:03:40.753648 | orchestrator | Saturday 11 April 2026 06:03:08 +0000 (0:00:01.140) 0:53:04.327 ******** 2026-04-11 06:03:40.753667 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.753678 | orchestrator | 2026-04-11 06:03:40.753689 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 06:03:40.753700 | orchestrator | Saturday 11 April 2026 06:03:09 +0000 (0:00:01.151) 0:53:05.479 ******** 2026-04-11 06:03:40.753711 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:03:40.753722 | orchestrator | 2026-04-11 06:03:40.753732 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 
06:03:40.753743 | orchestrator | Saturday 11 April 2026 06:03:10 +0000 (0:00:01.156) 0:53:06.636 ******** 2026-04-11 06:03:40.753754 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:03:40.753765 | orchestrator | 2026-04-11 06:03:40.753776 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 06:03:40.753786 | orchestrator | Saturday 11 April 2026 06:03:11 +0000 (0:00:01.183) 0:53:07.819 ******** 2026-04-11 06:03:40.753797 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:03:40.753808 | orchestrator | 2026-04-11 06:03:40.753818 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 06:03:40.753829 | orchestrator | Saturday 11 April 2026 06:03:12 +0000 (0:00:01.164) 0:53:08.983 ******** 2026-04-11 06:03:40.753840 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.753850 | orchestrator | 2026-04-11 06:03:40.753861 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 06:03:40.753871 | orchestrator | Saturday 11 April 2026 06:03:13 +0000 (0:00:01.179) 0:53:10.163 ******** 2026-04-11 06:03:40.753882 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.753893 | orchestrator | 2026-04-11 06:03:40.753904 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 06:03:40.753914 | orchestrator | Saturday 11 April 2026 06:03:15 +0000 (0:00:01.181) 0:53:11.344 ******** 2026-04-11 06:03:40.753925 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.753936 | orchestrator | 2026-04-11 06:03:40.753965 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 06:03:40.753977 | orchestrator | Saturday 11 April 2026 06:03:16 +0000 (0:00:01.130) 0:53:12.475 ******** 2026-04-11 06:03:40.753988 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:03:40.753999 | orchestrator | 2026-04-11 
06:03:40.754010 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 06:03:40.754081 | orchestrator | Saturday 11 April 2026 06:03:17 +0000 (0:00:01.129) 0:53:13.604 ******** 2026-04-11 06:03:40.754093 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:03:40.754104 | orchestrator | 2026-04-11 06:03:40.754114 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-11 06:03:40.754125 | orchestrator | Saturday 11 April 2026 06:03:18 +0000 (0:00:01.164) 0:53:14.768 ******** 2026-04-11 06:03:40.754136 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.754147 | orchestrator | 2026-04-11 06:03:40.754158 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-11 06:03:40.754168 | orchestrator | Saturday 11 April 2026 06:03:19 +0000 (0:00:01.138) 0:53:15.907 ******** 2026-04-11 06:03:40.754179 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.754189 | orchestrator | 2026-04-11 06:03:40.754200 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-11 06:03:40.754211 | orchestrator | Saturday 11 April 2026 06:03:20 +0000 (0:00:01.179) 0:53:17.087 ******** 2026-04-11 06:03:40.754221 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.754232 | orchestrator | 2026-04-11 06:03:40.754243 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-11 06:03:40.754253 | orchestrator | Saturday 11 April 2026 06:03:22 +0000 (0:00:01.127) 0:53:18.215 ******** 2026-04-11 06:03:40.754292 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.754311 | orchestrator | 2026-04-11 06:03:40.754330 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-11 06:03:40.754350 | orchestrator | Saturday 11 April 2026 06:03:23 +0000 (0:00:01.214) 
0:53:19.430 ******** 2026-04-11 06:03:40.754379 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.754391 | orchestrator | 2026-04-11 06:03:40.754402 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-11 06:03:40.754412 | orchestrator | Saturday 11 April 2026 06:03:24 +0000 (0:00:01.176) 0:53:20.606 ******** 2026-04-11 06:03:40.754423 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.754442 | orchestrator | 2026-04-11 06:03:40.754469 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-11 06:03:40.754489 | orchestrator | Saturday 11 April 2026 06:03:25 +0000 (0:00:01.211) 0:53:21.818 ******** 2026-04-11 06:03:40.754507 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.754525 | orchestrator | 2026-04-11 06:03:40.754542 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-11 06:03:40.754558 | orchestrator | Saturday 11 April 2026 06:03:26 +0000 (0:00:01.128) 0:53:22.946 ******** 2026-04-11 06:03:40.754575 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.754594 | orchestrator | 2026-04-11 06:03:40.754612 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-11 06:03:40.754631 | orchestrator | Saturday 11 April 2026 06:03:27 +0000 (0:00:01.141) 0:53:24.088 ******** 2026-04-11 06:03:40.754703 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.754725 | orchestrator | 2026-04-11 06:03:40.754747 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-11 06:03:40.754770 | orchestrator | Saturday 11 April 2026 06:03:29 +0000 (0:00:01.132) 0:53:25.221 ******** 2026-04-11 06:03:40.754782 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.754792 | orchestrator | 2026-04-11 06:03:40.754803 | orchestrator | TASK [ceph-common : 
Include configure_memory_allocator.yml] ******************** 2026-04-11 06:03:40.754814 | orchestrator | Saturday 11 April 2026 06:03:30 +0000 (0:00:01.185) 0:53:26.407 ******** 2026-04-11 06:03:40.754825 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.754835 | orchestrator | 2026-04-11 06:03:40.754855 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-11 06:03:40.754866 | orchestrator | Saturday 11 April 2026 06:03:31 +0000 (0:00:01.123) 0:53:27.531 ******** 2026-04-11 06:03:40.754876 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.754887 | orchestrator | 2026-04-11 06:03:40.754898 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-11 06:03:40.754909 | orchestrator | Saturday 11 April 2026 06:03:32 +0000 (0:00:01.091) 0:53:28.623 ******** 2026-04-11 06:03:40.754919 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:03:40.754930 | orchestrator | 2026-04-11 06:03:40.754941 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-11 06:03:40.754952 | orchestrator | Saturday 11 April 2026 06:03:34 +0000 (0:00:01.935) 0:53:30.558 ******** 2026-04-11 06:03:40.754963 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:03:40.754974 | orchestrator | 2026-04-11 06:03:40.754984 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-11 06:03:40.754995 | orchestrator | Saturday 11 April 2026 06:03:36 +0000 (0:00:02.218) 0:53:32.777 ******** 2026-04-11 06:03:40.755006 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-04-11 06:03:40.755017 | orchestrator | 2026-04-11 06:03:40.755027 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-11 06:03:40.755038 | orchestrator | Saturday 11 April 2026 06:03:37 +0000 (0:00:01.119) 
0:53:33.897 ******** 2026-04-11 06:03:40.755049 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.755060 | orchestrator | 2026-04-11 06:03:40.755070 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-11 06:03:40.755088 | orchestrator | Saturday 11 April 2026 06:03:38 +0000 (0:00:01.136) 0:53:35.033 ******** 2026-04-11 06:03:40.755107 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:03:40.755124 | orchestrator | 2026-04-11 06:03:40.755142 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-11 06:03:40.755173 | orchestrator | Saturday 11 April 2026 06:03:40 +0000 (0:00:01.207) 0:53:36.240 ******** 2026-04-11 06:03:40.755192 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 06:03:40.755226 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 06:04:28.661675 | orchestrator | 2026-04-11 06:04:28.661795 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-11 06:04:28.661812 | orchestrator | Saturday 11 April 2026 06:03:41 +0000 (0:00:01.804) 0:53:38.045 ******** 2026-04-11 06:04:28.661824 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:04:28.661836 | orchestrator | 2026-04-11 06:04:28.661848 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-11 06:04:28.661859 | orchestrator | Saturday 11 April 2026 06:03:43 +0000 (0:00:01.502) 0:53:39.547 ******** 2026-04-11 06:04:28.661870 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.661882 | orchestrator | 2026-04-11 06:04:28.661893 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-11 06:04:28.661904 | orchestrator | Saturday 11 April 2026 06:03:44 +0000 (0:00:01.203) 0:53:40.751 ******** 2026-04-11 06:04:28.661915 | 
orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.661925 | orchestrator | 2026-04-11 06:04:28.661936 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-11 06:04:28.661947 | orchestrator | Saturday 11 April 2026 06:03:45 +0000 (0:00:01.140) 0:53:41.891 ******** 2026-04-11 06:04:28.661958 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.661969 | orchestrator | 2026-04-11 06:04:28.661980 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-11 06:04:28.661991 | orchestrator | Saturday 11 April 2026 06:03:46 +0000 (0:00:01.124) 0:53:43.016 ******** 2026-04-11 06:04:28.662002 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-04-11 06:04:28.662079 | orchestrator | 2026-04-11 06:04:28.662092 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-11 06:04:28.662103 | orchestrator | Saturday 11 April 2026 06:03:47 +0000 (0:00:01.153) 0:53:44.169 ******** 2026-04-11 06:04:28.662114 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:04:28.662125 | orchestrator | 2026-04-11 06:04:28.662136 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-11 06:04:28.662181 | orchestrator | Saturday 11 April 2026 06:03:50 +0000 (0:00:02.730) 0:53:46.900 ******** 2026-04-11 06:04:28.662194 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 06:04:28.662207 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 06:04:28.662219 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 06:04:28.662232 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.662245 | orchestrator | 2026-04-11 06:04:28.662258 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-04-11 06:04:28.662304 | orchestrator | Saturday 11 April 2026 06:03:51 +0000 (0:00:01.164) 0:53:48.065 ******** 2026-04-11 06:04:28.662325 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.662345 | orchestrator | 2026-04-11 06:04:28.662364 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-11 06:04:28.662380 | orchestrator | Saturday 11 April 2026 06:03:53 +0000 (0:00:01.150) 0:53:49.215 ******** 2026-04-11 06:04:28.662391 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.662402 | orchestrator | 2026-04-11 06:04:28.662413 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-11 06:04:28.662424 | orchestrator | Saturday 11 April 2026 06:03:54 +0000 (0:00:01.163) 0:53:50.381 ******** 2026-04-11 06:04:28.662435 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.662446 | orchestrator | 2026-04-11 06:04:28.662456 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-11 06:04:28.662467 | orchestrator | Saturday 11 April 2026 06:03:55 +0000 (0:00:01.164) 0:53:51.545 ******** 2026-04-11 06:04:28.662506 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.662517 | orchestrator | 2026-04-11 06:04:28.662541 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-11 06:04:28.662552 | orchestrator | Saturday 11 April 2026 06:03:56 +0000 (0:00:01.207) 0:53:52.753 ******** 2026-04-11 06:04:28.662563 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.662573 | orchestrator | 2026-04-11 06:04:28.662584 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-11 06:04:28.662594 | orchestrator | Saturday 11 April 2026 06:03:57 +0000 (0:00:01.150) 0:53:53.904 ******** 2026-04-11 06:04:28.662605 | orchestrator | 
ok: [testbed-node-3] 2026-04-11 06:04:28.662616 | orchestrator | 2026-04-11 06:04:28.662626 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-11 06:04:28.662637 | orchestrator | Saturday 11 April 2026 06:04:00 +0000 (0:00:02.454) 0:53:56.358 ******** 2026-04-11 06:04:28.662648 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:04:28.662658 | orchestrator | 2026-04-11 06:04:28.662669 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-11 06:04:28.662705 | orchestrator | Saturday 11 April 2026 06:04:01 +0000 (0:00:01.213) 0:53:57.571 ******** 2026-04-11 06:04:28.662739 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-04-11 06:04:28.662751 | orchestrator | 2026-04-11 06:04:28.662761 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-11 06:04:28.662772 | orchestrator | Saturday 11 April 2026 06:04:02 +0000 (0:00:01.116) 0:53:58.688 ******** 2026-04-11 06:04:28.662783 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.662794 | orchestrator | 2026-04-11 06:04:28.662805 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-11 06:04:28.662816 | orchestrator | Saturday 11 April 2026 06:04:03 +0000 (0:00:01.195) 0:53:59.883 ******** 2026-04-11 06:04:28.662826 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.662837 | orchestrator | 2026-04-11 06:04:28.662848 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-11 06:04:28.662858 | orchestrator | Saturday 11 April 2026 06:04:04 +0000 (0:00:01.149) 0:54:01.033 ******** 2026-04-11 06:04:28.662869 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.662880 | orchestrator | 2026-04-11 06:04:28.662890 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-04-11 06:04:28.662920 | orchestrator | Saturday 11 April 2026 06:04:05 +0000 (0:00:01.139) 0:54:02.172 ******** 2026-04-11 06:04:28.662932 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.662943 | orchestrator | 2026-04-11 06:04:28.662954 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-11 06:04:28.662965 | orchestrator | Saturday 11 April 2026 06:04:07 +0000 (0:00:01.148) 0:54:03.320 ******** 2026-04-11 06:04:28.662975 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.662986 | orchestrator | 2026-04-11 06:04:28.662997 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-11 06:04:28.663007 | orchestrator | Saturday 11 April 2026 06:04:08 +0000 (0:00:01.151) 0:54:04.472 ******** 2026-04-11 06:04:28.663019 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.663037 | orchestrator | 2026-04-11 06:04:28.663060 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-11 06:04:28.663087 | orchestrator | Saturday 11 April 2026 06:04:09 +0000 (0:00:01.161) 0:54:05.634 ******** 2026-04-11 06:04:28.663103 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.663120 | orchestrator | 2026-04-11 06:04:28.663136 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-11 06:04:28.663153 | orchestrator | Saturday 11 April 2026 06:04:10 +0000 (0:00:01.165) 0:54:06.801 ******** 2026-04-11 06:04:28.663169 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.663186 | orchestrator | 2026-04-11 06:04:28.663204 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-11 06:04:28.663237 | orchestrator | Saturday 11 April 2026 06:04:11 +0000 (0:00:01.164) 0:54:07.965 ******** 2026-04-11 06:04:28.663256 | orchestrator | ok: [testbed-node-3] 
2026-04-11 06:04:28.663300 | orchestrator | 2026-04-11 06:04:28.663311 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-11 06:04:28.663322 | orchestrator | Saturday 11 April 2026 06:04:13 +0000 (0:00:01.311) 0:54:09.276 ******** 2026-04-11 06:04:28.663333 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-04-11 06:04:28.663344 | orchestrator | 2026-04-11 06:04:28.663355 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-11 06:04:28.663366 | orchestrator | Saturday 11 April 2026 06:04:14 +0000 (0:00:01.193) 0:54:10.470 ******** 2026-04-11 06:04:28.663376 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-04-11 06:04:28.663387 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-11 06:04:28.663405 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-11 06:04:28.663432 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-11 06:04:28.663451 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-11 06:04:28.663471 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-11 06:04:28.663490 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-11 06:04:28.663508 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-11 06:04:28.663527 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 06:04:28.663544 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 06:04:28.663562 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 06:04:28.663579 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 06:04:28.663598 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 06:04:28.663616 | 
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 06:04:28.663635 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-04-11 06:04:28.663653 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-04-11 06:04:28.663670 | orchestrator | 2026-04-11 06:04:28.663698 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-11 06:04:28.663716 | orchestrator | Saturday 11 April 2026 06:04:20 +0000 (0:00:06.466) 0:54:16.936 ******** 2026-04-11 06:04:28.663734 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-04-11 06:04:28.663752 | orchestrator | 2026-04-11 06:04:28.663770 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-11 06:04:28.663787 | orchestrator | Saturday 11 April 2026 06:04:21 +0000 (0:00:01.136) 0:54:18.072 ******** 2026-04-11 06:04:28.663805 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 06:04:28.663823 | orchestrator | 2026-04-11 06:04:28.663841 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-11 06:04:28.663859 | orchestrator | Saturday 11 April 2026 06:04:23 +0000 (0:00:01.462) 0:54:19.535 ******** 2026-04-11 06:04:28.663877 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 06:04:28.663895 | orchestrator | 2026-04-11 06:04:28.663913 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-11 06:04:28.663931 | orchestrator | Saturday 11 April 2026 06:04:25 +0000 (0:00:01.994) 0:54:21.530 ******** 2026-04-11 06:04:28.663949 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.663967 | orchestrator | 
2026-04-11 06:04:28.663984 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-11 06:04:28.664002 | orchestrator | Saturday 11 April 2026 06:04:26 +0000 (0:00:01.110) 0:54:22.641 ******** 2026-04-11 06:04:28.664032 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.664050 | orchestrator | 2026-04-11 06:04:28.664068 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-11 06:04:28.664087 | orchestrator | Saturday 11 April 2026 06:04:27 +0000 (0:00:01.114) 0:54:23.756 ******** 2026-04-11 06:04:28.664105 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:04:28.664123 | orchestrator | 2026-04-11 06:04:28.664141 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-11 06:04:28.664175 | orchestrator | Saturday 11 April 2026 06:04:28 +0000 (0:00:01.108) 0:54:24.864 ******** 2026-04-11 06:05:18.515083 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515172 | orchestrator | 2026-04-11 06:05:18.515182 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-11 06:05:18.515189 | orchestrator | Saturday 11 April 2026 06:04:29 +0000 (0:00:01.162) 0:54:26.026 ******** 2026-04-11 06:05:18.515195 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515201 | orchestrator | 2026-04-11 06:05:18.515206 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-11 06:05:18.515213 | orchestrator | Saturday 11 April 2026 06:04:31 +0000 (0:00:01.232) 0:54:27.258 ******** 2026-04-11 06:05:18.515218 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515223 | orchestrator | 2026-04-11 06:05:18.515229 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-11 06:05:18.515234 | 
orchestrator | Saturday 11 April 2026 06:04:32 +0000 (0:00:01.147) 0:54:28.406 ******** 2026-04-11 06:05:18.515239 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515244 | orchestrator | 2026-04-11 06:05:18.515249 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-11 06:05:18.515255 | orchestrator | Saturday 11 April 2026 06:04:33 +0000 (0:00:01.144) 0:54:29.551 ******** 2026-04-11 06:05:18.515260 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515265 | orchestrator | 2026-04-11 06:05:18.515270 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-11 06:05:18.515317 | orchestrator | Saturday 11 April 2026 06:04:34 +0000 (0:00:01.171) 0:54:30.723 ******** 2026-04-11 06:05:18.515323 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515328 | orchestrator | 2026-04-11 06:05:18.515333 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-11 06:05:18.515338 | orchestrator | Saturday 11 April 2026 06:04:35 +0000 (0:00:01.140) 0:54:31.864 ******** 2026-04-11 06:05:18.515343 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515348 | orchestrator | 2026-04-11 06:05:18.515354 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-11 06:05:18.515359 | orchestrator | Saturday 11 April 2026 06:04:36 +0000 (0:00:01.116) 0:54:32.980 ******** 2026-04-11 06:05:18.515364 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515369 | orchestrator | 2026-04-11 06:05:18.515374 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-11 06:05:18.515379 | orchestrator | Saturday 11 April 2026 06:04:37 +0000 (0:00:01.155) 0:54:34.136 ******** 2026-04-11 06:05:18.515385 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] 2026-04-11 06:05:18.515390 | orchestrator | 2026-04-11 06:05:18.515395 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-11 06:05:18.515400 | orchestrator | Saturday 11 April 2026 06:04:42 +0000 (0:00:04.505) 0:54:38.642 ******** 2026-04-11 06:05:18.515405 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 06:05:18.515411 | orchestrator | 2026-04-11 06:05:18.515417 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-11 06:05:18.515422 | orchestrator | Saturday 11 April 2026 06:04:43 +0000 (0:00:01.216) 0:54:39.859 ******** 2026-04-11 06:05:18.515441 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-04-11 06:05:18.515466 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-04-11 06:05:18.515473 | orchestrator | 2026-04-11 06:05:18.515478 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-11 06:05:18.515483 | orchestrator | Saturday 11 April 2026 06:04:48 +0000 (0:00:04.846) 0:54:44.706 ******** 2026-04-11 06:05:18.515488 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515494 | orchestrator | 2026-04-11 06:05:18.515499 | orchestrator | TASK [ceph-config : Create ceph 
conf directory] ******************************** 2026-04-11 06:05:18.515504 | orchestrator | Saturday 11 April 2026 06:04:49 +0000 (0:00:01.224) 0:54:45.930 ******** 2026-04-11 06:05:18.515509 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515514 | orchestrator | 2026-04-11 06:05:18.515519 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 06:05:18.515524 | orchestrator | Saturday 11 April 2026 06:04:50 +0000 (0:00:01.147) 0:54:47.078 ******** 2026-04-11 06:05:18.515530 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515535 | orchestrator | 2026-04-11 06:05:18.515540 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 06:05:18.515545 | orchestrator | Saturday 11 April 2026 06:04:52 +0000 (0:00:01.166) 0:54:48.245 ******** 2026-04-11 06:05:18.515550 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515555 | orchestrator | 2026-04-11 06:05:18.515560 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 06:05:18.515565 | orchestrator | Saturday 11 April 2026 06:04:53 +0000 (0:00:01.273) 0:54:49.519 ******** 2026-04-11 06:05:18.515570 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515575 | orchestrator | 2026-04-11 06:05:18.515581 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 06:05:18.515598 | orchestrator | Saturday 11 April 2026 06:04:54 +0000 (0:00:01.160) 0:54:50.680 ******** 2026-04-11 06:05:18.515604 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:05:18.515610 | orchestrator | 2026-04-11 06:05:18.515615 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 06:05:18.515621 | orchestrator | Saturday 11 April 2026 06:04:55 +0000 (0:00:01.321) 0:54:52.001 
******** 2026-04-11 06:05:18.515626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 06:05:18.515632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 06:05:18.515637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 06:05:18.515642 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515647 | orchestrator | 2026-04-11 06:05:18.515653 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 06:05:18.515658 | orchestrator | Saturday 11 April 2026 06:04:57 +0000 (0:00:01.461) 0:54:53.462 ******** 2026-04-11 06:05:18.515664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 06:05:18.515670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 06:05:18.515676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 06:05:18.515683 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515689 | orchestrator | 2026-04-11 06:05:18.515694 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 06:05:18.515701 | orchestrator | Saturday 11 April 2026 06:04:58 +0000 (0:00:01.493) 0:54:54.956 ******** 2026-04-11 06:05:18.515707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 06:05:18.515718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 06:05:18.515724 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 06:05:18.515730 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515736 | orchestrator | 2026-04-11 06:05:18.515742 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 06:05:18.515748 | orchestrator | Saturday 11 April 2026 06:05:00 +0000 (0:00:01.457) 0:54:56.414 ******** 2026-04-11 06:05:18.515754 | orchestrator | 
ok: [testbed-node-3] 2026-04-11 06:05:18.515760 | orchestrator | 2026-04-11 06:05:18.515766 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 06:05:18.515772 | orchestrator | Saturday 11 April 2026 06:05:01 +0000 (0:00:01.209) 0:54:57.623 ******** 2026-04-11 06:05:18.515779 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-11 06:05:18.515784 | orchestrator | 2026-04-11 06:05:18.515790 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-11 06:05:18.515796 | orchestrator | Saturday 11 April 2026 06:05:02 +0000 (0:00:01.406) 0:54:59.029 ******** 2026-04-11 06:05:18.515802 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:05:18.515808 | orchestrator | 2026-04-11 06:05:18.515814 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-11 06:05:18.515820 | orchestrator | Saturday 11 April 2026 06:05:04 +0000 (0:00:01.812) 0:55:00.842 ******** 2026-04-11 06:05:18.515826 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.515832 | orchestrator | 2026-04-11 06:05:18.515838 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-11 06:05:18.515844 | orchestrator | Saturday 11 April 2026 06:05:05 +0000 (0:00:01.149) 0:55:01.992 ******** 2026-04-11 06:05:18.515850 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3 2026-04-11 06:05:18.515856 | orchestrator | 2026-04-11 06:05:18.515862 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-11 06:05:18.515868 | orchestrator | Saturday 11 April 2026 06:05:07 +0000 (0:00:01.636) 0:55:03.629 ******** 2026-04-11 06:05:18.515874 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-11 06:05:18.515880 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 
2026-04-11 06:05:18.515886 | orchestrator | 2026-04-11 06:05:18.515892 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-11 06:05:18.515898 | orchestrator | Saturday 11 April 2026 06:05:09 +0000 (0:00:01.791) 0:55:05.421 ******** 2026-04-11 06:05:18.515905 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 06:05:18.515911 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-11 06:05:18.515917 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 06:05:18.515923 | orchestrator | 2026-04-11 06:05:18.515930 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-11 06:05:18.515935 | orchestrator | Saturday 11 April 2026 06:05:12 +0000 (0:00:03.122) 0:55:08.543 ******** 2026-04-11 06:05:18.515942 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-11 06:05:18.515947 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-11 06:05:18.515952 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:05:18.515957 | orchestrator | 2026-04-11 06:05:18.515963 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-11 06:05:18.515987 | orchestrator | Saturday 11 April 2026 06:05:14 +0000 (0:00:01.969) 0:55:10.513 ******** 2026-04-11 06:05:18.515993 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:05:18.515998 | orchestrator | 2026-04-11 06:05:18.516003 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-11 06:05:18.516009 | orchestrator | Saturday 11 April 2026 06:05:15 +0000 (0:00:01.469) 0:55:11.983 ******** 2026-04-11 06:05:18.516014 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:05:18.516019 | orchestrator | 2026-04-11 06:05:18.516024 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-11 
06:05:18.516044 | orchestrator | Saturday 11 April 2026 06:05:16 +0000 (0:00:01.116) 0:55:13.099 ******** 2026-04-11 06:05:18.516049 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3 2026-04-11 06:05:18.516055 | orchestrator | 2026-04-11 06:05:18.516061 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-11 06:05:18.516066 | orchestrator | Saturday 11 April 2026 06:05:18 +0000 (0:00:01.476) 0:55:14.575 ******** 2026-04-11 06:05:18.516075 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3 2026-04-11 06:06:04.285437 | orchestrator | 2026-04-11 06:06:04.285634 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-11 06:06:04.285667 | orchestrator | Saturday 11 April 2026 06:05:19 +0000 (0:00:01.538) 0:55:16.113 ******** 2026-04-11 06:06:04.285689 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:06:04.285710 | orchestrator | 2026-04-11 06:06:04.285729 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-11 06:06:04.285748 | orchestrator | Saturday 11 April 2026 06:05:21 +0000 (0:00:01.960) 0:55:18.074 ******** 2026-04-11 06:06:04.285766 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:06:04.285784 | orchestrator | 2026-04-11 06:06:04.285803 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-11 06:06:04.285823 | orchestrator | Saturday 11 April 2026 06:05:23 +0000 (0:00:02.004) 0:55:20.078 ******** 2026-04-11 06:06:04.285844 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:06:04.285866 | orchestrator | 2026-04-11 06:06:04.285887 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-11 06:06:04.285911 | orchestrator | Saturday 11 April 2026 06:05:26 +0000 (0:00:02.240) 0:55:22.319 ******** 2026-04-11 06:06:04.285932 | 
orchestrator | ok: [testbed-node-3] 2026-04-11 06:06:04.285955 | orchestrator | 2026-04-11 06:06:04.285978 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-11 06:06:04.286000 | orchestrator | Saturday 11 April 2026 06:05:28 +0000 (0:00:02.373) 0:55:24.692 ******** 2026-04-11 06:06:04.286114 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:06:04.286140 | orchestrator | 2026-04-11 06:06:04.286164 | orchestrator | TASK [Restart ceph mds] ******************************************************** 2026-04-11 06:06:04.286204 | orchestrator | Saturday 11 April 2026 06:05:30 +0000 (0:00:01.598) 0:55:26.291 ******** 2026-04-11 06:06:04.286226 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:06:04.286248 | orchestrator | 2026-04-11 06:06:04.286268 | orchestrator | TASK [Restart active mds] ****************************************************** 2026-04-11 06:06:04.286320 | orchestrator | Saturday 11 April 2026 06:05:31 +0000 (0:00:01.130) 0:55:27.422 ******** 2026-04-11 06:06:04.286341 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:06:04.286359 | orchestrator | 2026-04-11 06:06:04.286379 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] ************************************** 2026-04-11 06:06:04.286397 | orchestrator | 2026-04-11 06:06:04.286415 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-11 06:06:04.286432 | orchestrator | Saturday 11 April 2026 06:05:40 +0000 (0:00:09.658) 0:55:37.080 ******** 2026-04-11 06:06:04.286451 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4, testbed-node-5 2026-04-11 06:06:04.286471 | orchestrator | 2026-04-11 06:06:04.286488 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-11 06:06:04.286505 | orchestrator | Saturday 11 April 2026 06:05:42 +0000 (0:00:01.253) 0:55:38.333 ******** 2026-04-11 06:06:04.286522 | 
orchestrator | ok: [testbed-node-4] 2026-04-11 06:06:04.286540 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:06:04.286556 | orchestrator | 2026-04-11 06:06:04.286574 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-11 06:06:04.286591 | orchestrator | Saturday 11 April 2026 06:05:43 +0000 (0:00:01.549) 0:55:39.882 ******** 2026-04-11 06:06:04.286608 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:06:04.286626 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:06:04.286686 | orchestrator | 2026-04-11 06:06:04.286706 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-11 06:06:04.286723 | orchestrator | Saturday 11 April 2026 06:05:45 +0000 (0:00:01.701) 0:55:41.584 ******** 2026-04-11 06:06:04.286739 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:06:04.286755 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:06:04.286771 | orchestrator | 2026-04-11 06:06:04.286809 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 06:06:04.286829 | orchestrator | Saturday 11 April 2026 06:05:46 +0000 (0:00:01.612) 0:55:43.197 ******** 2026-04-11 06:06:04.286846 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:06:04.286863 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:06:04.286879 | orchestrator | 2026-04-11 06:06:04.286895 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-11 06:06:04.286912 | orchestrator | Saturday 11 April 2026 06:05:48 +0000 (0:00:01.270) 0:55:44.467 ******** 2026-04-11 06:06:04.286930 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:06:04.286947 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:06:04.286964 | orchestrator | 2026-04-11 06:06:04.286981 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-11 06:06:04.286998 | orchestrator | Saturday 11 April 
2026 06:05:49 +0000 (0:00:01.215) 0:55:45.682 ******** 2026-04-11 06:06:04.287015 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:06:04.287032 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:06:04.287049 | orchestrator | 2026-04-11 06:06:04.287068 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-11 06:06:04.287085 | orchestrator | Saturday 11 April 2026 06:05:50 +0000 (0:00:01.254) 0:55:46.937 ******** 2026-04-11 06:06:04.287102 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:06:04.287121 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:06:04.287139 | orchestrator | 2026-04-11 06:06:04.287157 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-11 06:06:04.287174 | orchestrator | Saturday 11 April 2026 06:05:51 +0000 (0:00:01.265) 0:55:48.202 ******** 2026-04-11 06:06:04.287191 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:06:04.287208 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:06:04.287226 | orchestrator | 2026-04-11 06:06:04.287244 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-11 06:06:04.287262 | orchestrator | Saturday 11 April 2026 06:05:53 +0000 (0:00:01.249) 0:55:49.452 ******** 2026-04-11 06:06:04.287279 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:06:04.287332 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:06:04.287351 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:06:04.287369 | orchestrator | 2026-04-11 06:06:04.287388 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-11 06:06:04.287444 | orchestrator | Saturday 11 April 2026 06:05:55 +0000 (0:00:02.190) 0:55:51.643 ******** 2026-04-11 
06:06:04.287500 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:06:04.287518 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:06:04.287535 | orchestrator | 2026-04-11 06:06:04.287552 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-11 06:06:04.287570 | orchestrator | Saturday 11 April 2026 06:05:56 +0000 (0:00:01.450) 0:55:53.093 ******** 2026-04-11 06:06:04.287588 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:06:04.287606 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:06:04.287624 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:06:04.287643 | orchestrator | 2026-04-11 06:06:04.287661 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-11 06:06:04.287679 | orchestrator | Saturday 11 April 2026 06:05:59 +0000 (0:00:02.944) 0:55:56.037 ******** 2026-04-11 06:06:04.287721 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-11 06:06:04.287739 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-11 06:06:04.287756 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-11 06:06:04.287774 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:06:04.287789 | orchestrator | 2026-04-11 06:06:04.287804 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-11 06:06:04.287819 | orchestrator | Saturday 11 April 2026 06:06:01 +0000 (0:00:01.453) 0:55:57.491 ******** 2026-04-11 06:06:04.287838 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-11 
06:06:04.287858 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-11 06:06:04.287874 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-11 06:06:04.287890 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:06:04.287906 | orchestrator | 2026-04-11 06:06:04.287922 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-11 06:06:04.287938 | orchestrator | Saturday 11 April 2026 06:06:03 +0000 (0:00:01.729) 0:55:59.221 ******** 2026-04-11 06:06:04.287971 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:06:04.287994 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:06:04.288011 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:06:04.288027 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:06:04.288043 | orchestrator | 2026-04-11 06:06:04.288058 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-11 06:06:04.288073 | orchestrator | Saturday 11 April 2026 06:06:04 +0000 (0:00:01.173) 0:56:00.394 ******** 2026-04-11 06:06:04.288113 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 06:05:57.434782', 'end': '2026-04-11 06:05:57.484537', 'delta': '0:00:00.049755', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-11 06:06:24.639204 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '26fb3b048944', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 06:05:58.029206', 'end': '2026-04-11 06:05:58.086868', 'delta': '0:00:00.057662', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26fb3b048944'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 06:06:24.639392 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '5c0324173fbf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 06:05:58.598867', 'end': '2026-04-11 06:05:58.662745', 'delta': '0:00:00.063878', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5c0324173fbf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 06:06:24.639411 | orchestrator |
2026-04-11 06:06:24.639425 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-11 06:06:24.639438 | orchestrator | Saturday 11 April 2026 06:06:05 +0000 (0:00:01.249) 0:56:01.644 ********
2026-04-11 06:06:24.639449 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:06:24.639462 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:06:24.639473 | orchestrator |
2026-04-11 06:06:24.639484 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-11 06:06:24.639495 | orchestrator | Saturday 11 April 2026 06:06:06 +0000 (0:00:01.386) 0:56:03.031 ********
2026-04-11 06:06:24.639506 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:06:24.639518 | orchestrator |
2026-04-11 06:06:24.639529 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-11 06:06:24.639540 | orchestrator | Saturday 11 April 2026 06:06:08 +0000 (0:00:01.270) 0:56:04.301 ********
2026-04-11 06:06:24.639550 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:06:24.639561 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:06:24.639572 | orchestrator |
2026-04-11 06:06:24.639602 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-11 06:06:24.639614 | orchestrator | Saturday 11 April 2026 06:06:09 +0000 (0:00:01.275) 0:56:05.577 ********
2026-04-11 06:06:24.639625 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-11 06:06:24.639636 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-11 06:06:24.639647 | orchestrator |
2026-04-11 06:06:24.639658 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 06:06:24.639669 | orchestrator | Saturday 11 April 2026 06:06:11 +0000 (0:00:02.513) 0:56:08.091 ********
2026-04-11 06:06:24.639679 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:06:24.639690 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:06:24.639701 | orchestrator |
2026-04-11 06:06:24.639714 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-11 06:06:24.639727 | orchestrator | Saturday 11 April 2026 06:06:13 +0000 (0:00:01.311) 0:56:09.403 ********
2026-04-11 06:06:24.639740 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:06:24.639753 | orchestrator |
2026-04-11 06:06:24.639765 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-11 06:06:24.639778 | orchestrator | Saturday 11 April 2026 06:06:14 +0000 (0:00:01.102) 0:56:10.505 ********
2026-04-11 06:06:24.639820 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:06:24.639833 | orchestrator |
2026-04-11 06:06:24.639846 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 06:06:24.639858 | orchestrator | Saturday 11 April 2026 06:06:15 +0000 (0:00:01.282) 0:56:11.787 ********
2026-04-11 06:06:24.639871 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:06:24.639884 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:06:24.639896 | orchestrator |
2026-04-11 06:06:24.639909 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-11 06:06:24.639921 | orchestrator | Saturday 11 April 2026 06:06:16 +0000 (0:00:01.296) 0:56:13.083 ********
2026-04-11 06:06:24.639934 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:06:24.639948 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:06:24.639961 | orchestrator |
2026-04-11 06:06:24.639974 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-11 06:06:24.639986 | orchestrator | Saturday 11 April 2026 06:06:18 +0000 (0:00:01.256) 0:56:14.339 ********
2026-04-11 06:06:24.639999 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:06:24.640011 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:06:24.640025 | orchestrator |
2026-04-11 06:06:24.640038 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-11 06:06:24.640051 | orchestrator | Saturday 11 April 2026 06:06:19 +0000 (0:00:01.276) 0:56:15.616 ********
2026-04-11 06:06:24.640064 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:06:24.640076 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:06:24.640087 | orchestrator |
2026-04-11 06:06:24.640118 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-11 06:06:24.640130 | orchestrator | Saturday 11 April 2026 06:06:20 +0000 (0:00:01.279) 0:56:16.896 ********
2026-04-11 06:06:24.640141 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:06:24.640152 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:06:24.640163 | orchestrator |
2026-04-11
06:06:24.640173 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-11 06:06:24.640184 | orchestrator | Saturday 11 April 2026 06:06:22 +0000 (0:00:01.335) 0:56:18.231 ******** 2026-04-11 06:06:24.640195 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:06:24.640206 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:06:24.640216 | orchestrator | 2026-04-11 06:06:24.640227 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-11 06:06:24.640239 | orchestrator | Saturday 11 April 2026 06:06:23 +0000 (0:00:01.223) 0:56:19.454 ******** 2026-04-11 06:06:24.640250 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:06:24.640260 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:06:24.640271 | orchestrator | 2026-04-11 06:06:24.640282 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-11 06:06:24.640315 | orchestrator | Saturday 11 April 2026 06:06:24 +0000 (0:00:01.255) 0:56:20.710 ******** 2026-04-11 06:06:24.640340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:06:24.640356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2', 'dm-uuid-LVM-1JO1XI6e6VuGVeVzDykcfKbBtikjhudLLEUIdm7ttGNsolk0UkQjcUO4narXEX2E'], 'uuids': ['9d724d10-77ae-4967-ad2d-00bd58cf4b58'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E']}})  2026-04-11 06:06:24.640384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac', 'scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7ad0a670', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 06:06:24.640397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gs6fgb-1Wcf-xL0p-5nrc-t0Sp-iDOp-vEqK0z', 'scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb', 'scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855']}})  2026-04-11 06:06:24.640410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:06:24.640430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:06:24.741800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 06:06:24.741904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:06:24.741913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh', 'dm-uuid-CRYPT-LUKS2-f995fcc5d8e74f9b8df633437ec8101a-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 06:06:24.741940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:06:24.741959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855', 'dm-uuid-LVM-K7WW8kSs32CapDCsexGLtC6qsV1U5049IOnZa3AHrzxg1HkvDRqme1iBPNHDbFWh'], 'uuids': ['f995fcc5-d8e7-4f9b-8df6-33437ec8101a'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh']}})  2026-04-11 06:06:24.741965 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MaeyQs-lCkd-15by-ONeM-2vsv-Cp22-T0mgnh', 'scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f', 'scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2']}})  2026-04-11 06:06:24.741970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:06:24.741994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '122e9594', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-11 06:06:24.742009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:06:24.742056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:06:24.742060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056', 'dm-uuid-LVM-6h6BzLnxVSITPOCXsTMPdEdwYnxpyl6jcENBjNwdWV4iIXI6HpUJIGXCmHnbKWOn'], 'uuids': ['9614ebde-9763-41b8-8070-f8f6acc1ef2b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn']}})  2026-04-11 06:06:24.742065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:06:24.742075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735', 'scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '17a8d280', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 06:06:24.868584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E', 'dm-uuid-CRYPT-LUKS2-9d724d1077ae4967ad2d00bd58cf4b58-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 06:06:24.868716 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:06:24.868735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gv5rB0-5v31-5ChI-IvnR-CmdW-Foh5-mihe2a', 'scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3', 'scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 
'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412']}})  2026-04-11 06:06:24.868798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:06:24.868813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:06:24.868826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 06:06:24.868839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:06:24.868851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ', 'dm-uuid-CRYPT-LUKS2-bdcb2384073e4d9c84ce45a3274a4645-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 06:06:24.868888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:06:24.868902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412', 'dm-uuid-LVM-VdQ7qTAVdW9b0W0u4soeoyYCMykAdMqIVywyC0poxaFsTavehHwqykfd0GhP5gkQ'], 'uuids': ['bdcb2384-073e-4d9c-84ce-45a3274a4645'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': 
{}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ']}})
2026-04-11 06:06:24.868929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JtVcog-BSy1-h8Zb-tm9w-DiRX-1Dbq-bS56zI', 'scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78', 'scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056']}})
2026-04-11 06:06:24.868941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:06:24.868966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a75c226', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-11 06:06:26.195696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:06:26.195833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:06:26.195866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn', 'dm-uuid-CRYPT-LUKS2-9614ebde976341b88070f8f6acc1ef2b-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-11 06:06:26.195882 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:06:26.195897 | orchestrator |
2026-04-11 06:06:26.195910 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-11 06:06:26.195923 | orchestrator | Saturday 11 April 2026 06:06:25 +0000 (0:00:01.475) 0:56:22.185 ********
2026-04-11 06:06:26.195937 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.195952 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2', 'dm-uuid-LVM-1JO1XI6e6VuGVeVzDykcfKbBtikjhudLLEUIdm7ttGNsolk0UkQjcUO4narXEX2E'], 'uuids': ['9d724d10-77ae-4967-ad2d-00bd58cf4b58'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E']}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.195966 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac', 'scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7ad0a670', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.196019 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gs6fgb-1Wcf-xL0p-5nrc-t0Sp-iDOp-vEqK0z', 'scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb', 'scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855']}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.196052 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.196072 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.196092 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.196112 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.196131 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.196175 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056',
'dm-uuid-LVM-6h6BzLnxVSITPOCXsTMPdEdwYnxpyl6jcENBjNwdWV4iIXI6HpUJIGXCmHnbKWOn'], 'uuids': ['9614ebde-9763-41b8-8070-f8f6acc1ef2b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn']}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.261432 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh', 'dm-uuid-CRYPT-LUKS2-f995fcc5d8e74f9b8df633437ec8101a-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.261520 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735', 'scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '17a8d280', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.261531 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.261539 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855', 'dm-uuid-LVM-K7WW8kSs32CapDCsexGLtC6qsV1U5049IOnZa3AHrzxg1HkvDRqme1iBPNHDbFWh'], 'uuids': ['f995fcc5-d8e7-4f9b-8df6-33437ec8101a'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh']}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.261566 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gv5rB0-5v31-5ChI-IvnR-CmdW-Foh5-mihe2a', 'scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3', 'scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412']}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.261594 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.261602 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MaeyQs-lCkd-15by-ONeM-2vsv-Cp22-T0mgnh', 'scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f', 'scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2']}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.261609 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.261616 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.261631 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.261648 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '122e9594', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.354180 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.354269 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.354343 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ', 'dm-uuid-CRYPT-LUKS2-bdcb2384073e4d9c84ce45a3274a4645-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.354362 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.354398 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [],
'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.354417 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E', 'dm-uuid-CRYPT-LUKS2-9d724d1077ae4967ad2d00bd58cf4b58-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.354433 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:06:26.354479 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412', 'dm-uuid-LVM-VdQ7qTAVdW9b0W0u4soeoyYCMykAdMqIVywyC0poxaFsTavehHwqykfd0GhP5gkQ'], 'uuids': ['bdcb2384-073e-4d9c-84ce-45a3274a4645'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ']}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.354510 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JtVcog-BSy1-h8Zb-tm9w-DiRX-1Dbq-bS56zI', 'scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78', 'scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056']}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.354530 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:26.354565 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a75c226', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:55.067219 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:55.067364 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:55.067383 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn', 'dm-uuid-CRYPT-LUKS2-9614ebde976341b88070f8f6acc1ef2b-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:06:55.067396 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:06:55.067410 | orchestrator |
2026-04-11 06:06:55.067423 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-11 06:06:55.067435 | orchestrator | Saturday 11 April 2026 06:06:27 +0000 (0:00:01.486) 0:56:23.672 ********
2026-04-11 06:06:55.067462 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:06:55.067475 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:06:55.067486 | orchestrator |
2026-04-11 06:06:55.067497 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-11 06:06:55.067508 | orchestrator | Saturday 11 April 2026 06:06:29 +0000 (0:00:01.768) 0:56:25.440 ********
2026-04-11 06:06:55.067518 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:06:55.067529 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:06:55.067540 | orchestrator |
2026-04-11 06:06:55.067551 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-11 06:06:55.067562 | orchestrator | Saturday 11 April 2026 06:06:30 +0000 (0:00:01.600) 0:56:27.040 ********
2026-04-11 06:06:55.067572 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:06:55.067583 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:06:55.067594 | orchestrator |
2026-04-11 06:06:55.067605 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-11 06:06:55.067615 | orchestrator | Saturday 11 April 2026 06:06:32 +0000 (0:00:01.636) 0:56:28.677 ********
2026-04-11 06:06:55.067627 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:06:55.067639 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:06:55.067649 | orchestrator |
2026-04-11 06:06:55.067660 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-11 06:06:55.067671 | orchestrator | Saturday 11 April 2026 06:06:33 +0000 (0:00:01.275) 0:56:29.953 ********
2026-04-11 06:06:55.067682 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:06:55.067720 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:06:55.067732 | orchestrator |
2026-04-11 06:06:55.067743 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-11 06:06:55.067756 | orchestrator | Saturday 11 April 2026 06:06:35 +0000 (0:00:01.461) 0:56:31.414 ********
2026-04-11 06:06:55.067769 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:06:55.067783 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:06:55.067796 | orchestrator |
2026-04-11 06:06:55.067810 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-11 06:06:55.067823 | orchestrator | Saturday 11 April 2026 06:06:36 +0000 (0:00:01.321) 0:56:32.735 ********
2026-04-11 06:06:55.067836 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-11 06:06:55.067851 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-11 06:06:55.067864 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-11 06:06:55.067877 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-11 06:06:55.067890 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-11 06:06:55.067901 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-11 06:06:55.067911 | orchestrator |
2026-04-11 06:06:55.067922 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-11 06:06:55.067933 | orchestrator | Saturday 11 April 2026 06:06:38 +0000 (0:00:02.145) 0:56:34.881 ********
2026-04-11 06:06:55.067961 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-11 06:06:55.067974 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-11 06:06:55.067984 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-11 06:06:55.067995 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:06:55.068006 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-11 06:06:55.068017 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-11 06:06:55.068028 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-11 06:06:55.068038 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:06:55.068049 | orchestrator |
2026-04-11 06:06:55.068060 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-11 06:06:55.068071 | orchestrator | Saturday 11 April 2026 06:06:40 +0000 (0:00:01.390) 0:56:36.272 ********
2026-04-11 06:06:55.068083 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-5
2026-04-11 06:06:55.068095 | orchestrator |
2026-04-11 06:06:55.068106 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 06:06:55.068118 | orchestrator | Saturday 11 April 2026 06:06:41 +0000 (0:00:01.246) 0:56:37.519 ********
2026-04-11 06:06:55.068129 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:06:55.068140 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:06:55.068151 | orchestrator |
2026-04-11 06:06:55.068162 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 06:06:55.068173 | orchestrator | Saturday 11 April 2026 06:06:42 +0000 (0:00:01.254) 0:56:38.774 ********
2026-04-11 06:06:55.068183 | orchestrator | skipping: [testbed-node-4]
2026-04-11
06:06:55.068194 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:06:55.068205 | orchestrator | 2026-04-11 06:06:55.068216 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 06:06:55.068227 | orchestrator | Saturday 11 April 2026 06:06:43 +0000 (0:00:01.248) 0:56:40.022 ******** 2026-04-11 06:06:55.068238 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:06:55.068248 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:06:55.068259 | orchestrator | 2026-04-11 06:06:55.068271 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 06:06:55.068281 | orchestrator | Saturday 11 April 2026 06:06:45 +0000 (0:00:01.303) 0:56:41.326 ******** 2026-04-11 06:06:55.068378 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:06:55.068412 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:06:55.068431 | orchestrator | 2026-04-11 06:06:55.068449 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 06:06:55.068467 | orchestrator | Saturday 11 April 2026 06:06:46 +0000 (0:00:01.383) 0:56:42.710 ******** 2026-04-11 06:06:55.068485 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 06:06:55.068506 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-11 06:06:55.068525 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-11 06:06:55.068542 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:06:55.068562 | orchestrator | 2026-04-11 06:06:55.068581 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 06:06:55.068608 | orchestrator | Saturday 11 April 2026 06:06:48 +0000 (0:00:01.777) 0:56:44.488 ******** 2026-04-11 06:06:55.068620 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 06:06:55.068631 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-4)  2026-04-11 06:06:55.068641 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-11 06:06:55.068652 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:06:55.068663 | orchestrator | 2026-04-11 06:06:55.068674 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 06:06:55.068684 | orchestrator | Saturday 11 April 2026 06:06:49 +0000 (0:00:01.457) 0:56:45.946 ******** 2026-04-11 06:06:55.068695 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 06:06:55.068706 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-11 06:06:55.068716 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-11 06:06:55.068727 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:06:55.068738 | orchestrator | 2026-04-11 06:06:55.068749 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 06:06:55.068760 | orchestrator | Saturday 11 April 2026 06:06:51 +0000 (0:00:01.417) 0:56:47.364 ******** 2026-04-11 06:06:55.068770 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:06:55.068781 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:06:55.068792 | orchestrator | 2026-04-11 06:06:55.068802 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 06:06:55.068813 | orchestrator | Saturday 11 April 2026 06:06:52 +0000 (0:00:01.290) 0:56:48.654 ******** 2026-04-11 06:06:55.068824 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-11 06:06:55.068835 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-11 06:06:55.068846 | orchestrator | 2026-04-11 06:06:55.068857 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 06:06:55.068868 | orchestrator | Saturday 11 April 2026 06:06:53 +0000 (0:00:01.442) 0:56:50.096 
******** 2026-04-11 06:06:55.068878 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:06:55.068889 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:06:55.068900 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:06:55.068911 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 06:06:55.068921 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-11 06:06:55.068932 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 06:06:55.068953 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 06:07:40.158995 | orchestrator | 2026-04-11 06:07:40.159110 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 06:07:40.159127 | orchestrator | Saturday 11 April 2026 06:06:56 +0000 (0:00:02.347) 0:56:52.443 ******** 2026-04-11 06:07:40.159139 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:07:40.159151 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:07:40.159186 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:07:40.159198 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 06:07:40.159210 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-11 06:07:40.159221 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 06:07:40.159232 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 06:07:40.159243 | orchestrator | 
2026-04-11 06:07:40.159254 | orchestrator | TASK [Prevent restarts from the packaging] ************************************* 2026-04-11 06:07:40.159265 | orchestrator | Saturday 11 April 2026 06:06:58 +0000 (0:00:02.631) 0:56:55.075 ******** 2026-04-11 06:07:40.159276 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.159289 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.159300 | orchestrator | 2026-04-11 06:07:40.159310 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 06:07:40.159352 | orchestrator | Saturday 11 April 2026 06:07:00 +0000 (0:00:01.238) 0:56:56.314 ******** 2026-04-11 06:07:40.159363 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-5 2026-04-11 06:07:40.159375 | orchestrator | 2026-04-11 06:07:40.159386 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 06:07:40.159396 | orchestrator | Saturday 11 April 2026 06:07:01 +0000 (0:00:01.606) 0:56:57.920 ******** 2026-04-11 06:07:40.159407 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-5 2026-04-11 06:07:40.159418 | orchestrator | 2026-04-11 06:07:40.159429 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 06:07:40.159440 | orchestrator | Saturday 11 April 2026 06:07:02 +0000 (0:00:01.263) 0:56:59.183 ******** 2026-04-11 06:07:40.159451 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.159462 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.159483 | orchestrator | 2026-04-11 06:07:40.159494 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 06:07:40.159505 | orchestrator | Saturday 11 April 2026 06:07:04 +0000 (0:00:01.279) 0:57:00.463 ******** 2026-04-11 06:07:40.159519 | 
orchestrator | ok: [testbed-node-4] 2026-04-11 06:07:40.159532 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:07:40.159544 | orchestrator | 2026-04-11 06:07:40.159557 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-11 06:07:40.159584 | orchestrator | Saturday 11 April 2026 06:07:05 +0000 (0:00:01.638) 0:57:02.102 ******** 2026-04-11 06:07:40.159598 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:07:40.159610 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:07:40.159623 | orchestrator | 2026-04-11 06:07:40.159636 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 06:07:40.159649 | orchestrator | Saturday 11 April 2026 06:07:07 +0000 (0:00:01.657) 0:57:03.759 ******** 2026-04-11 06:07:40.159662 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:07:40.159675 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:07:40.159688 | orchestrator | 2026-04-11 06:07:40.159702 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 06:07:40.159715 | orchestrator | Saturday 11 April 2026 06:07:09 +0000 (0:00:02.079) 0:57:05.838 ******** 2026-04-11 06:07:40.159726 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.159737 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.159748 | orchestrator | 2026-04-11 06:07:40.159759 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 06:07:40.159770 | orchestrator | Saturday 11 April 2026 06:07:10 +0000 (0:00:01.309) 0:57:07.148 ******** 2026-04-11 06:07:40.159781 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.159792 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.159802 | orchestrator | 2026-04-11 06:07:40.159823 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 06:07:40.159834 | orchestrator 
| Saturday 11 April 2026 06:07:12 +0000 (0:00:01.259) 0:57:08.408 ******** 2026-04-11 06:07:40.159845 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.159856 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.159867 | orchestrator | 2026-04-11 06:07:40.159878 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 06:07:40.159889 | orchestrator | Saturday 11 April 2026 06:07:13 +0000 (0:00:01.315) 0:57:09.723 ******** 2026-04-11 06:07:40.159899 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:07:40.159910 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:07:40.159921 | orchestrator | 2026-04-11 06:07:40.159932 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 06:07:40.159943 | orchestrator | Saturday 11 April 2026 06:07:15 +0000 (0:00:01.646) 0:57:11.370 ******** 2026-04-11 06:07:40.159954 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:07:40.159964 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:07:40.159975 | orchestrator | 2026-04-11 06:07:40.159986 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 06:07:40.159997 | orchestrator | Saturday 11 April 2026 06:07:16 +0000 (0:00:01.667) 0:57:13.038 ******** 2026-04-11 06:07:40.160008 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.160020 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.160031 | orchestrator | 2026-04-11 06:07:40.160041 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 06:07:40.160052 | orchestrator | Saturday 11 April 2026 06:07:18 +0000 (0:00:01.297) 0:57:14.336 ******** 2026-04-11 06:07:40.160063 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.160093 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.160113 | orchestrator | 2026-04-11 06:07:40.160132 | orchestrator | TASK 
[ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 06:07:40.160156 | orchestrator | Saturday 11 April 2026 06:07:19 +0000 (0:00:01.238) 0:57:15.574 ******** 2026-04-11 06:07:40.160181 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:07:40.160198 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:07:40.160215 | orchestrator | 2026-04-11 06:07:40.160232 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 06:07:40.160249 | orchestrator | Saturday 11 April 2026 06:07:20 +0000 (0:00:01.250) 0:57:16.825 ******** 2026-04-11 06:07:40.160268 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:07:40.160287 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:07:40.160305 | orchestrator | 2026-04-11 06:07:40.160346 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 06:07:40.160357 | orchestrator | Saturday 11 April 2026 06:07:21 +0000 (0:00:01.241) 0:57:18.066 ******** 2026-04-11 06:07:40.160368 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:07:40.160379 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:07:40.160390 | orchestrator | 2026-04-11 06:07:40.160401 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 06:07:40.160412 | orchestrator | Saturday 11 April 2026 06:07:23 +0000 (0:00:01.239) 0:57:19.305 ******** 2026-04-11 06:07:40.160422 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.160434 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.160444 | orchestrator | 2026-04-11 06:07:40.160455 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 06:07:40.160467 | orchestrator | Saturday 11 April 2026 06:07:24 +0000 (0:00:01.243) 0:57:20.549 ******** 2026-04-11 06:07:40.160477 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.160488 | orchestrator | skipping: 
[testbed-node-5] 2026-04-11 06:07:40.160499 | orchestrator | 2026-04-11 06:07:40.160510 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 06:07:40.160521 | orchestrator | Saturday 11 April 2026 06:07:25 +0000 (0:00:01.255) 0:57:21.804 ******** 2026-04-11 06:07:40.160532 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.160543 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.160565 | orchestrator | 2026-04-11 06:07:40.160576 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 06:07:40.160586 | orchestrator | Saturday 11 April 2026 06:07:27 +0000 (0:00:01.606) 0:57:23.411 ******** 2026-04-11 06:07:40.160597 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:07:40.160608 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:07:40.160619 | orchestrator | 2026-04-11 06:07:40.160630 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 06:07:40.160641 | orchestrator | Saturday 11 April 2026 06:07:28 +0000 (0:00:01.286) 0:57:24.697 ******** 2026-04-11 06:07:40.160652 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:07:40.160663 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:07:40.160673 | orchestrator | 2026-04-11 06:07:40.160684 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-11 06:07:40.160695 | orchestrator | Saturday 11 April 2026 06:07:29 +0000 (0:00:01.239) 0:57:25.937 ******** 2026-04-11 06:07:40.160706 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.160717 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.160728 | orchestrator | 2026-04-11 06:07:40.160747 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-11 06:07:40.160758 | orchestrator | Saturday 11 April 2026 06:07:30 +0000 (0:00:01.206) 0:57:27.143 ******** 
2026-04-11 06:07:40.160769 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.160780 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.160791 | orchestrator | 2026-04-11 06:07:40.160802 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-11 06:07:40.160813 | orchestrator | Saturday 11 April 2026 06:07:32 +0000 (0:00:01.217) 0:57:28.361 ******** 2026-04-11 06:07:40.160823 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.160834 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.160845 | orchestrator | 2026-04-11 06:07:40.160856 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-11 06:07:40.160867 | orchestrator | Saturday 11 April 2026 06:07:33 +0000 (0:00:01.222) 0:57:29.583 ******** 2026-04-11 06:07:40.160878 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.160889 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.160900 | orchestrator | 2026-04-11 06:07:40.160911 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-11 06:07:40.160922 | orchestrator | Saturday 11 April 2026 06:07:34 +0000 (0:00:01.627) 0:57:31.211 ******** 2026-04-11 06:07:40.160933 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.160943 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.160954 | orchestrator | 2026-04-11 06:07:40.160965 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-11 06:07:40.160976 | orchestrator | Saturday 11 April 2026 06:07:36 +0000 (0:00:01.256) 0:57:32.467 ******** 2026-04-11 06:07:40.160987 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.160998 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.161009 | orchestrator | 2026-04-11 06:07:40.161023 | orchestrator | TASK [ceph-common : Set_fact ceph_version] 
************************************* 2026-04-11 06:07:40.161045 | orchestrator | Saturday 11 April 2026 06:07:37 +0000 (0:00:01.223) 0:57:33.690 ******** 2026-04-11 06:07:40.161073 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.161091 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.161109 | orchestrator | 2026-04-11 06:07:40.161127 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-11 06:07:40.161145 | orchestrator | Saturday 11 April 2026 06:07:38 +0000 (0:00:01.235) 0:57:34.926 ******** 2026-04-11 06:07:40.161164 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.161181 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.161199 | orchestrator | 2026-04-11 06:07:40.161218 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-11 06:07:40.161236 | orchestrator | Saturday 11 April 2026 06:07:39 +0000 (0:00:01.205) 0:57:36.132 ******** 2026-04-11 06:07:40.161267 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:07:40.161308 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:07:40.161410 | orchestrator | 2026-04-11 06:07:40.161446 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-11 06:08:25.450594 | orchestrator | Saturday 11 April 2026 06:07:41 +0000 (0:00:01.281) 0:57:37.413 ******** 2026-04-11 06:08:25.450677 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:08:25.450685 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:08:25.450689 | orchestrator | 2026-04-11 06:08:25.450694 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-11 06:08:25.450699 | orchestrator | Saturday 11 April 2026 06:07:42 +0000 (0:00:01.307) 0:57:38.721 ******** 2026-04-11 06:08:25.450703 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:08:25.450707 | orchestrator | 
skipping: [testbed-node-5] 2026-04-11 06:08:25.450711 | orchestrator | 2026-04-11 06:08:25.450715 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-11 06:08:25.450719 | orchestrator | Saturday 11 April 2026 06:07:43 +0000 (0:00:01.257) 0:57:39.978 ******** 2026-04-11 06:08:25.450723 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:08:25.450727 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:08:25.450731 | orchestrator | 2026-04-11 06:08:25.450735 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-11 06:08:25.450739 | orchestrator | Saturday 11 April 2026 06:07:45 +0000 (0:00:01.265) 0:57:41.244 ******** 2026-04-11 06:08:25.450743 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:08:25.450747 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:08:25.450751 | orchestrator | 2026-04-11 06:08:25.450755 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-11 06:08:25.450759 | orchestrator | Saturday 11 April 2026 06:07:47 +0000 (0:00:02.037) 0:57:43.282 ******** 2026-04-11 06:08:25.450763 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:08:25.450767 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:08:25.450770 | orchestrator | 2026-04-11 06:08:25.450774 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-11 06:08:25.450778 | orchestrator | Saturday 11 April 2026 06:07:49 +0000 (0:00:02.348) 0:57:45.630 ******** 2026-04-11 06:08:25.450783 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4, testbed-node-5 2026-04-11 06:08:25.450787 | orchestrator | 2026-04-11 06:08:25.450791 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-11 06:08:25.450795 | orchestrator | Saturday 11 April 2026 06:07:50 +0000 (0:00:01.504) 0:57:47.134 
******** 2026-04-11 06:08:25.450798 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:08:25.450802 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:08:25.450806 | orchestrator | 2026-04-11 06:08:25.450810 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-11 06:08:25.450814 | orchestrator | Saturday 11 April 2026 06:07:52 +0000 (0:00:01.286) 0:57:48.421 ******** 2026-04-11 06:08:25.450818 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:08:25.450822 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:08:25.450825 | orchestrator | 2026-04-11 06:08:25.450829 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-11 06:08:25.450833 | orchestrator | Saturday 11 April 2026 06:07:53 +0000 (0:00:01.244) 0:57:49.665 ******** 2026-04-11 06:08:25.450837 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 06:08:25.450851 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 06:08:25.450856 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 06:08:25.450860 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 06:08:25.450863 | orchestrator | 2026-04-11 06:08:25.450867 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-11 06:08:25.450871 | orchestrator | Saturday 11 April 2026 06:07:55 +0000 (0:00:01.951) 0:57:51.617 ******** 2026-04-11 06:08:25.450888 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:08:25.450892 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:08:25.450895 | orchestrator | 2026-04-11 06:08:25.450899 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-11 06:08:25.450903 | orchestrator | Saturday 11 April 
2026 06:07:56 +0000 (0:00:01.558) 0:57:53.176 ******** 2026-04-11 06:08:25.450907 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:08:25.450911 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:08:25.450915 | orchestrator | 2026-04-11 06:08:25.450918 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-11 06:08:25.450922 | orchestrator | Saturday 11 April 2026 06:07:58 +0000 (0:00:01.279) 0:57:54.456 ******** 2026-04-11 06:08:25.450926 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:08:25.450930 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:08:25.450934 | orchestrator | 2026-04-11 06:08:25.450937 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-11 06:08:25.450941 | orchestrator | Saturday 11 April 2026 06:07:59 +0000 (0:00:01.394) 0:57:55.851 ******** 2026-04-11 06:08:25.450945 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:08:25.450949 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:08:25.450953 | orchestrator | 2026-04-11 06:08:25.450956 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-11 06:08:25.450960 | orchestrator | Saturday 11 April 2026 06:08:00 +0000 (0:00:01.231) 0:57:57.082 ******** 2026-04-11 06:08:25.450964 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4, testbed-node-5 2026-04-11 06:08:25.450968 | orchestrator | 2026-04-11 06:08:25.450972 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-11 06:08:25.450975 | orchestrator | Saturday 11 April 2026 06:08:02 +0000 (0:00:01.235) 0:57:58.318 ******** 2026-04-11 06:08:25.450979 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:08:25.450983 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:08:25.450987 | orchestrator | 2026-04-11 06:08:25.450991 | orchestrator | TASK 
[ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-11 06:08:25.450995 | orchestrator | Saturday 11 April 2026 06:08:03 +0000 (0:00:01.799) 0:58:00.118 ********
2026-04-11 06:08:25.450999 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-11 06:08:25.451013 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-11 06:08:25.451017 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-11 06:08:25.451021 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:08:25.451025 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-11 06:08:25.451029 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-11 06:08:25.451033 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-11 06:08:25.451037 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:08:25.451040 | orchestrator |
2026-04-11 06:08:25.451044 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-11 06:08:25.451048 | orchestrator | Saturday 11 April 2026 06:08:05 +0000 (0:00:01.292) 0:58:01.411 ********
2026-04-11 06:08:25.451052 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:08:25.451056 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:08:25.451059 | orchestrator |
2026-04-11 06:08:25.451063 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-11 06:08:25.451067 | orchestrator | Saturday 11 April 2026 06:08:06 +0000 (0:00:01.247) 0:58:02.658 ********
2026-04-11 06:08:25.451071 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:08:25.451074 | orchestrator |
2026-04-11 06:08:25.451078 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-11 06:08:25.451082 | orchestrator | Saturday 11 April 2026 06:08:07 +0000 (0:00:01.142) 0:58:03.801 ********
2026-04-11 06:08:25.451089 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:08:25.451093 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:08:25.451097 | orchestrator |
2026-04-11 06:08:25.451101 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-11 06:08:25.451105 | orchestrator | Saturday 11 April 2026 06:08:09 +0000 (0:00:01.622) 0:58:05.424 ********
2026-04-11 06:08:25.451108 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:08:25.451112 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:08:25.451116 | orchestrator |
2026-04-11 06:08:25.451120 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-11 06:08:25.451123 | orchestrator | Saturday 11 April 2026 06:08:10 +0000 (0:00:01.291) 0:58:06.715 ********
2026-04-11 06:08:25.451127 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:08:25.451131 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:08:25.451135 | orchestrator |
2026-04-11 06:08:25.451138 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-11 06:08:25.451142 | orchestrator | Saturday 11 April 2026 06:08:11 +0000 (0:00:01.276) 0:58:07.992 ********
2026-04-11 06:08:25.451146 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:08:25.451157 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:08:25.451161 | orchestrator |
2026-04-11 06:08:25.451165 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-11 06:08:25.451169 | orchestrator | Saturday 11 April 2026 06:08:14 +0000 (0:00:02.676) 0:58:10.669 ********
2026-04-11 06:08:25.451178 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:08:25.451182 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:08:25.451185 | orchestrator |
2026-04-11 06:08:25.451192 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-11 06:08:25.451196 | orchestrator | Saturday 11 April 2026 06:08:15 +0000 (0:00:01.276) 0:58:11.946 ********
2026-04-11 06:08:25.451199 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4, testbed-node-5
2026-04-11 06:08:25.451204 | orchestrator |
2026-04-11 06:08:25.451207 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-11 06:08:25.451211 | orchestrator | Saturday 11 April 2026 06:08:17 +0000 (0:00:01.411) 0:58:13.358 ********
2026-04-11 06:08:25.451215 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:08:25.451219 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:08:25.451223 | orchestrator |
2026-04-11 06:08:25.451226 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-11 06:08:25.451230 | orchestrator | Saturday 11 April 2026 06:08:18 +0000 (0:00:01.340) 0:58:14.699 ********
2026-04-11 06:08:25.451234 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:08:25.451238 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:08:25.451241 | orchestrator |
2026-04-11 06:08:25.451245 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-11 06:08:25.451249 | orchestrator | Saturday 11 April 2026 06:08:19 +0000 (0:00:01.265) 0:58:15.965 ********
2026-04-11 06:08:25.451253 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:08:25.451257 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:08:25.451260 | orchestrator |
2026-04-11 06:08:25.451264 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-11 06:08:25.451268 | orchestrator | Saturday 11 April 2026 06:08:21 +0000 (0:00:01.344) 0:58:17.310 ********
2026-04-11 06:08:25.451272 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:08:25.451275 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:08:25.451279 | orchestrator |
2026-04-11 06:08:25.451283 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-11 06:08:25.451287 | orchestrator | Saturday 11 April 2026 06:08:22 +0000 (0:00:01.216) 0:58:18.526 ********
2026-04-11 06:08:25.451290 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:08:25.451294 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:08:25.451298 | orchestrator |
2026-04-11 06:08:25.451302 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-11 06:08:25.451310 | orchestrator | Saturday 11 April 2026 06:08:23 +0000 (0:00:01.262) 0:58:19.789 ********
2026-04-11 06:08:25.451313 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:08:25.451317 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:08:25.451321 | orchestrator |
2026-04-11 06:08:25.451325 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-11 06:08:25.451329 | orchestrator | Saturday 11 April 2026 06:08:24 +0000 (0:00:01.276) 0:58:21.065 ********
2026-04-11 06:08:25.451332 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:08:25.451336 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:08:25.451340 | orchestrator |
2026-04-11 06:08:25.451346 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-11 06:09:04.641931 | orchestrator | Saturday 11 April 2026 06:08:26 +0000 (0:00:01.780) 0:58:22.845 ********
2026-04-11 06:09:04.642109 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:04.642130 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:04.642142 | orchestrator |
2026-04-11 06:09:04.642155 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-11 06:09:04.642167 | orchestrator | Saturday 11 April 2026 06:08:27 +0000 (0:00:01.226) 0:58:24.072 ********
2026-04-11 06:09:04.642178 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:09:04.642190 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:09:04.642201 | orchestrator |
2026-04-11 06:09:04.642212 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-11 06:09:04.642223 | orchestrator | Saturday 11 April 2026 06:08:29 +0000 (0:00:01.297) 0:58:25.370 ********
2026-04-11 06:09:04.642235 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4, testbed-node-5
2026-04-11 06:09:04.642246 | orchestrator |
2026-04-11 06:09:04.642257 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-11 06:09:04.642268 | orchestrator | Saturday 11 April 2026 06:08:30 +0000 (0:00:01.226) 0:58:26.597 ********
2026-04-11 06:09:04.642278 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-04-11 06:09:04.642290 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-04-11 06:09:04.642301 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-11 06:09:04.642312 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-11 06:09:04.642323 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-11 06:09:04.642334 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-11 06:09:04.642344 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-11 06:09:04.642355 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-11 06:09:04.642366 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-11 06:09:04.642376 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-11 06:09:04.642387 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-11 06:09:04.642398 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-11 06:09:04.642409 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-11 06:09:04.642419 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-11 06:09:04.642430 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-11 06:09:04.642441 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-11 06:09:04.642452 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-11 06:09:04.642465 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-11 06:09:04.642478 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-11 06:09:04.642491 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-11 06:09:04.642519 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-11 06:09:04.642533 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-11 06:09:04.642570 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-11 06:09:04.642581 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-11 06:09:04.642592 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-11 06:09:04.642602 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-11 06:09:04.642613 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-11 06:09:04.642624 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-11 06:09:04.642635 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-04-11 06:09:04.642645 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-04-11 06:09:04.642713 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-04-11 06:09:04.642726 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-04-11 06:09:04.642737 | orchestrator |
2026-04-11 06:09:04.642762 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-11 06:09:04.642774 | orchestrator | Saturday 11 April 2026 06:08:37 +0000 (0:00:06.739) 0:58:33.336 ********
2026-04-11 06:09:04.642784 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-5
2026-04-11 06:09:04.642796 | orchestrator |
2026-04-11 06:09:04.642807 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-11 06:09:04.642829 | orchestrator | Saturday 11 April 2026 06:08:38 +0000 (0:00:01.235) 0:58:34.572 ********
2026-04-11 06:09:04.642842 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-11 06:09:04.642855 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 06:09:04.642866 | orchestrator |
2026-04-11 06:09:04.642877 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-11 06:09:04.642888 | orchestrator | Saturday 11 April 2026 06:08:39 +0000 (0:00:01.605) 0:58:36.178 ********
2026-04-11 06:09:04.642899 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-11 06:09:04.642910 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 06:09:04.642921 | orchestrator |
2026-04-11 06:09:04.642932 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-11 06:09:04.642960 | orchestrator | Saturday 11 April 2026 06:08:42 +0000 (0:00:02.118) 0:58:38.297 ********
2026-04-11 06:09:04.642972 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:04.642983 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:04.642994 | orchestrator |
2026-04-11 06:09:04.643005 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-11 06:09:04.643016 | orchestrator | Saturday 11 April 2026 06:08:43 +0000 (0:00:01.233) 0:58:39.531 ********
2026-04-11 06:09:04.643026 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:04.643037 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:04.643048 | orchestrator |
2026-04-11 06:09:04.643059 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-11 06:09:04.643070 | orchestrator | Saturday 11 April 2026 06:08:44 +0000 (0:00:01.269) 0:58:40.800 ********
2026-04-11 06:09:04.643081 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:04.643092 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:04.643102 | orchestrator |
2026-04-11 06:09:04.643113 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-11 06:09:04.643124 | orchestrator | Saturday 11 April 2026 06:08:46 +0000 (0:00:01.602) 0:58:42.403 ********
2026-04-11 06:09:04.643135 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:04.643146 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:04.643157 | orchestrator |
2026-04-11 06:09:04.643168 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-11 06:09:04.643188 | orchestrator | Saturday 11 April 2026 06:08:47 +0000 (0:00:01.264) 0:58:43.668 ********
2026-04-11 06:09:04.643199 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:04.643210 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:04.643221 | orchestrator |
2026-04-11 06:09:04.643232 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-11 06:09:04.643243 | orchestrator | Saturday 11 April 2026 06:08:48 +0000 (0:00:01.298) 0:58:44.967 ********
2026-04-11 06:09:04.643254 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:04.643265 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:04.643276 | orchestrator |
2026-04-11 06:09:04.643287 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-11 06:09:04.643298 | orchestrator | Saturday 11 April 2026 06:08:50 +0000 (0:00:01.266) 0:58:46.233 ********
2026-04-11 06:09:04.643309 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:04.643320 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:04.643330 | orchestrator |
2026-04-11 06:09:04.643341 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-11 06:09:04.643352 | orchestrator | Saturday 11 April 2026 06:08:51 +0000 (0:00:01.284) 0:58:47.518 ********
2026-04-11 06:09:04.643363 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:04.643374 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:04.643385 | orchestrator |
2026-04-11 06:09:04.643396 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-11 06:09:04.643407 | orchestrator | Saturday 11 April 2026 06:08:52 +0000 (0:00:01.214) 0:58:48.733 ********
2026-04-11 06:09:04.643423 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:04.643435 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:04.643446 | orchestrator |
2026-04-11 06:09:04.643457 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-11 06:09:04.643468 | orchestrator | Saturday 11 April 2026 06:08:53 +0000 (0:00:01.232) 0:58:49.966 ********
2026-04-11 06:09:04.643478 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:04.643489 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:04.643500 | orchestrator |
2026-04-11 06:09:04.643511 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-11 06:09:04.643522 | orchestrator | Saturday 11 April 2026 06:08:55 +0000 (0:00:01.693) 0:58:51.659 ********
2026-04-11 06:09:04.643533 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:04.643544 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:04.643555 | orchestrator |
2026-04-11 06:09:04.643566 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-11 06:09:04.643577 | orchestrator | Saturday 11 April 2026 06:08:56 +0000 (0:00:01.233) 0:58:52.893 ********
2026-04-11 06:09:04.643588 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-04-11 06:09:04.643599 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-04-11 06:09:04.643610 | orchestrator |
2026-04-11 06:09:04.643621 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-11 06:09:04.643631 | orchestrator | Saturday 11 April 2026 06:09:01 +0000 (0:00:04.572) 0:58:57.465 ********
2026-04-11 06:09:04.643642 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-11 06:09:04.643653 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 06:09:04.643687 | orchestrator |
2026-04-11 06:09:04.643698 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-11 06:09:04.643709 | orchestrator | Saturday 11 April 2026 06:09:02 +0000 (0:00:01.302) 0:58:58.768 ********
2026-04-11 06:09:04.643722 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-11 06:09:04.643752 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-11 06:09:54.711120 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-11 06:09:54.711234 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-11 06:09:54.711250 | orchestrator |
2026-04-11 06:09:54.711264 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-11 06:09:54.711277 | orchestrator | Saturday 11 April 2026 06:09:07 +0000 (0:00:05.008) 0:59:03.777 ********
2026-04-11 06:09:54.711289 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:54.711302 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:54.711313 | orchestrator |
2026-04-11 06:09:54.711324 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-11 06:09:54.711335 | orchestrator | Saturday 11 April 2026 06:09:08 +0000 (0:00:01.413) 0:59:05.191 ********
2026-04-11 06:09:54.711346 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:54.711357 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:54.711368 | orchestrator |
2026-04-11 06:09:54.711380 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 06:09:54.711392 | orchestrator | Saturday 11 April 2026 06:09:10 +0000 (0:00:01.297) 0:59:06.488 ********
2026-04-11 06:09:54.711403 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:54.711414 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:54.711425 | orchestrator |
2026-04-11 06:09:54.711436 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 06:09:54.711447 | orchestrator | Saturday 11 April 2026 06:09:11 +0000 (0:00:01.316) 0:59:07.805 ********
2026-04-11 06:09:54.711458 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:54.711469 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:54.711480 | orchestrator |
2026-04-11 06:09:54.711491 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 06:09:54.711502 | orchestrator | Saturday 11 April 2026 06:09:12 +0000 (0:00:01.307) 0:59:09.112 ********
2026-04-11 06:09:54.711513 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:54.711524 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:54.711535 | orchestrator |
2026-04-11 06:09:54.711561 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 06:09:54.711573 | orchestrator | Saturday 11 April 2026 06:09:14 +0000 (0:00:01.283) 0:59:10.396 ********
2026-04-11 06:09:54.711584 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:09:54.711596 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:09:54.711607 | orchestrator |
2026-04-11 06:09:54.711618 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 06:09:54.711629 | orchestrator | Saturday 11 April 2026 06:09:15 +0000 (0:00:01.417) 0:59:11.814 ********
2026-04-11 06:09:54.711640 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-11 06:09:54.711676 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-11 06:09:54.711689 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-11 06:09:54.711702 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:54.711715 | orchestrator |
2026-04-11 06:09:54.711728 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 06:09:54.711740 | orchestrator | Saturday 11 April 2026 06:09:17 +0000 (0:00:01.418) 0:59:13.232 ********
2026-04-11 06:09:54.711753 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-11 06:09:54.711765 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-11 06:09:54.711778 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-11 06:09:54.711790 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:54.711803 | orchestrator |
2026-04-11 06:09:54.711816 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 06:09:54.711829 | orchestrator | Saturday 11 April 2026 06:09:18 +0000 (0:00:01.396) 0:59:14.629 ********
2026-04-11 06:09:54.711872 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-11 06:09:54.711892 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-11 06:09:54.711912 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-11 06:09:54.711932 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:54.711952 | orchestrator |
2026-04-11 06:09:54.711967 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 06:09:54.711980 | orchestrator | Saturday 11 April 2026 06:09:20 +0000 (0:00:01.853) 0:59:16.483 ********
2026-04-11 06:09:54.711992 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:09:54.712006 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:09:54.712019 | orchestrator |
2026-04-11 06:09:54.712032 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 06:09:54.712045 | orchestrator | Saturday 11 April 2026 06:09:21 +0000 (0:00:01.340) 0:59:17.823 ********
2026-04-11 06:09:54.712056 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-11 06:09:54.712067 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-11 06:09:54.712078 | orchestrator |
2026-04-11 06:09:54.712089 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-11 06:09:54.712099 | orchestrator | Saturday 11 April 2026 06:09:23 +0000 (0:00:01.482) 0:59:19.306 ********
2026-04-11 06:09:54.712110 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:09:54.712120 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:09:54.712131 | orchestrator |
2026-04-11 06:09:54.712160 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-11 06:09:54.712172 | orchestrator | Saturday 11 April 2026 06:09:25 +0000 (0:00:01.968) 0:59:21.275 ********
2026-04-11 06:09:54.712182 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:54.712193 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:54.712204 | orchestrator |
2026-04-11 06:09:54.712215 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-11 06:09:54.712226 | orchestrator | Saturday 11 April 2026 06:09:26 +0000 (0:00:01.206) 0:59:22.481 ********
2026-04-11 06:09:54.712236 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4, testbed-node-5
2026-04-11 06:09:54.712248 | orchestrator |
2026-04-11 06:09:54.712258 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-11 06:09:54.712269 | orchestrator | Saturday 11 April 2026 06:09:27 +0000 (0:00:01.394) 0:59:23.876 ********
2026-04-11 06:09:54.712279 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-11 06:09:54.712290 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-11 06:09:54.712300 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-11 06:09:54.712311 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-11 06:09:54.712322 | orchestrator |
2026-04-11 06:09:54.712332 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-11 06:09:54.712353 | orchestrator | Saturday 11 April 2026 06:09:29 +0000 (0:00:02.023) 0:59:25.899 ********
2026-04-11 06:09:54.712363 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 06:09:54.712374 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-11 06:09:54.712385 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-11 06:09:54.712396 | orchestrator |
2026-04-11 06:09:54.712406 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-11 06:09:54.712417 | orchestrator | Saturday 11 April 2026 06:09:32 +0000 (0:00:03.246) 0:59:29.146 ********
2026-04-11 06:09:54.712428 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-04-11 06:09:54.712439 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-11 06:09:54.712449 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:09:54.712460 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-11 06:09:54.712471 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-11 06:09:54.712481 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:09:54.712492 | orchestrator |
2026-04-11 06:09:54.712503 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-11 06:09:54.712513 | orchestrator | Saturday 11 April 2026 06:09:35 +0000 (0:00:02.137) 0:59:31.283 ********
2026-04-11 06:09:54.712524 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:09:54.712535 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:09:54.712545 | orchestrator |
2026-04-11 06:09:54.712562 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-11 06:09:54.712573 | orchestrator | Saturday 11 April 2026 06:09:36 +0000 (0:00:01.599) 0:59:32.883 ********
2026-04-11 06:09:54.712584 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:54.712595 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:09:54.712606 | orchestrator |
2026-04-11 06:09:54.712617 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-11 06:09:54.712627 | orchestrator | Saturday 11 April 2026 06:09:37 +0000 (0:00:01.213) 0:59:34.097 ********
2026-04-11 06:09:54.712638 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4, testbed-node-5
2026-04-11 06:09:54.712649 | orchestrator |
2026-04-11 06:09:54.712659 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-11 06:09:54.712670 | orchestrator | Saturday 11 April 2026 06:09:39 +0000 (0:00:01.440) 0:59:35.537 ********
2026-04-11 06:09:54.712680 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4, testbed-node-5
2026-04-11 06:09:54.712691 | orchestrator |
2026-04-11 06:09:54.712702 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-11 06:09:54.712712 | orchestrator | Saturday 11 April 2026 06:09:40 +0000 (0:00:01.242) 0:59:36.779 ********
2026-04-11 06:09:54.712723 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:09:54.712734 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:09:54.712745 | orchestrator |
2026-04-11 06:09:54.712755 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-11 06:09:54.712766 | orchestrator | Saturday 11 April 2026 06:09:42 +0000 (0:00:02.110) 0:59:38.890 ********
2026-04-11 06:09:54.712776 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:09:54.712787 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:09:54.712798 | orchestrator |
2026-04-11 06:09:54.712808 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-11 06:09:54.712819 | orchestrator | Saturday 11 April 2026 06:09:44 +0000 (0:00:02.018) 0:59:40.909 ********
2026-04-11 06:09:54.712830 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:09:54.712896 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:09:54.712910 | orchestrator |
2026-04-11 06:09:54.712921 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-11 06:09:54.712932 | orchestrator | Saturday 11 April 2026 06:09:46 +0000 (0:00:02.300) 0:59:43.210 ********
2026-04-11 06:09:54.712942 | orchestrator | changed: [testbed-node-4]
2026-04-11 06:09:54.712964 | orchestrator | changed: [testbed-node-5]
2026-04-11 06:09:54.712975 | orchestrator |
2026-04-11 06:09:54.712985 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-11 06:09:54.712996 | orchestrator | Saturday 11 April 2026 06:09:50 +0000 (0:00:03.448) 0:59:46.658 ********
2026-04-11 06:09:54.713007 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:09:54.713017 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:09:54.713028 | orchestrator |
2026-04-11 06:09:54.713039 | orchestrator | TASK [Set max_mds] *************************************************************
2026-04-11 06:09:54.713049 | orchestrator | Saturday 11 April 2026 06:09:52 +0000 (0:00:02.223) 0:59:48.881 ********
2026-04-11 06:09:54.713060 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:09:54.713078 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-11 06:10:17.751661 | orchestrator |
2026-04-11 06:10:17.751778 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-04-11 06:10:17.751795 | orchestrator |
2026-04-11 06:10:17.751807 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-11 06:10:17.751818 | orchestrator | Saturday 11 April 2026 06:09:55 +0000 (0:00:03.324) 0:59:52.206 ********
2026-04-11 06:10:17.751829 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-04-11 06:10:17.751840 | orchestrator |
2026-04-11 06:10:17.751851 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-11 06:10:17.751862 | orchestrator | Saturday 11 April 2026 06:09:57 +0000 (0:00:01.127) 0:59:53.333 ********
2026-04-11 06:10:17.751873 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:10:17.751885 | orchestrator |
2026-04-11 06:10:17.751896 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-11 06:10:17.751907 | orchestrator | Saturday 11 April 2026 06:09:58 +0000 (0:00:01.474) 0:59:54.808 ********
2026-04-11 06:10:17.751977 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:10:17.751990 | orchestrator |
2026-04-11 06:10:17.752001 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-11 06:10:17.752012 | orchestrator | Saturday 11 April 2026 06:09:59 +0000 (0:00:01.142) 0:59:55.951 ********
2026-04-11 06:10:17.752023 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:10:17.752033 | orchestrator |
2026-04-11 06:10:17.752044 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-11 06:10:17.752055 | orchestrator | Saturday 11 April 2026 06:10:01 +0000 (0:00:01.463) 0:59:57.415 ********
2026-04-11 06:10:17.752066 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:10:17.752076 | orchestrator |
2026-04-11 06:10:17.752087 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-11 06:10:17.752098 | orchestrator | Saturday 11 April 2026 06:10:02 +0000 (0:00:01.172) 0:59:58.588 ********
2026-04-11 06:10:17.752109 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:10:17.752120 | orchestrator |
2026-04-11 06:10:17.752131 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-11 06:10:17.752142 | orchestrator | Saturday 11 April 2026 06:10:03 +0000 (0:00:01.193) 0:59:59.782 ********
2026-04-11 06:10:17.752153 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:10:17.752163 | orchestrator |
2026-04-11 06:10:17.752174 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-11 06:10:17.752187 | orchestrator | Saturday 11 April 2026 06:10:04 +0000 (0:00:01.186) 1:00:00.969 ********
2026-04-11 06:10:17.752200 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:10:17.752213 | orchestrator |
2026-04-11 06:10:17.752226 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-11 06:10:17.752238 | orchestrator | Saturday 11 April 2026 06:10:05 +0000 (0:00:01.183) 1:00:02.153 ********
2026-04-11 06:10:17.752250 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:10:17.752263 | orchestrator |
2026-04-11 06:10:17.752292 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-11 06:10:17.752304 | orchestrator | Saturday 11 April 2026 06:10:07 +0000 (0:00:01.133) 1:00:03.286 ********
2026-04-11 06:10:17.752341 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 06:10:17.752354 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 06:10:17.752367 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 06:10:17.752379 | orchestrator |
2026-04-11 06:10:17.752391 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-11 06:10:17.752403 | orchestrator | Saturday 11 April 2026 06:10:08 +0000 (0:00:01.640) 1:00:04.927 ********
2026-04-11 06:10:17.752416 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:10:17.752428 | orchestrator |
2026-04-11 06:10:17.752440 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-11 06:10:17.752453 | orchestrator | Saturday 11 April 2026 06:10:09 +0000 (0:00:01.208) 1:00:06.135 ********
2026-04-11 06:10:17.752465 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 06:10:17.752477 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 06:10:17.752490 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 06:10:17.752502 | orchestrator |
2026-04-11 06:10:17.752515 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-11 06:10:17.752528 | orchestrator | Saturday 11 April 2026 06:10:12 +0000 (0:00:02.793) 1:00:08.930 ********
2026-04-11 06:10:17.752542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-11 06:10:17.752554 | orchestrator | skipping: [testbed-node-3]
=> (item=testbed-node-1)  2026-04-11 06:10:17.752564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-11 06:10:17.752575 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:10:17.752586 | orchestrator | 2026-04-11 06:10:17.752596 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-11 06:10:17.752607 | orchestrator | Saturday 11 April 2026 06:10:14 +0000 (0:00:01.462) 1:00:10.392 ******** 2026-04-11 06:10:17.752620 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-11 06:10:17.752634 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-11 06:10:17.752663 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-11 06:10:17.752675 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:10:17.752686 | orchestrator | 2026-04-11 06:10:17.752697 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-11 06:10:17.752707 | orchestrator | Saturday 11 April 2026 06:10:16 +0000 (0:00:02.148) 1:00:12.540 ******** 2026-04-11 06:10:17.752720 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:10:17.752735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:10:17.752754 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:10:17.752765 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:10:17.752776 | orchestrator | 2026-04-11 06:10:17.752787 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-11 06:10:17.752798 | orchestrator | Saturday 11 April 2026 06:10:17 +0000 (0:00:01.184) 1:00:13.724 ******** 2026-04-11 06:10:17.752815 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 06:10:10.428648', 'end': '2026-04-11 06:10:10.471992', 'delta': '0:00:00.043344', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 
'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-11 06:10:17.752830 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '26fb3b048944', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 06:10:10.982870', 'end': '2026-04-11 06:10:11.032748', 'delta': '0:00:00.049878', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26fb3b048944'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-11 06:10:17.752842 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5c0324173fbf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 06:10:11.513082', 'end': '2026-04-11 06:10:11.549212', 'delta': '0:00:00.036130', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5c0324173fbf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-11 06:10:17.752854 | orchestrator | 2026-04-11 06:10:17.752871 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-11 06:10:36.424511 | orchestrator | Saturday 11 
April 2026 06:10:18 +0000 (0:00:01.242) 1:00:14.967 ******** 2026-04-11 06:10:36.424633 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:10:36.424651 | orchestrator | 2026-04-11 06:10:36.424665 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-11 06:10:36.424677 | orchestrator | Saturday 11 April 2026 06:10:19 +0000 (0:00:01.225) 1:00:16.192 ******** 2026-04-11 06:10:36.424689 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:10:36.424700 | orchestrator | 2026-04-11 06:10:36.424712 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-11 06:10:36.424723 | orchestrator | Saturday 11 April 2026 06:10:21 +0000 (0:00:01.280) 1:00:17.473 ******** 2026-04-11 06:10:36.424734 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:10:36.424769 | orchestrator | 2026-04-11 06:10:36.424781 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-11 06:10:36.424791 | orchestrator | Saturday 11 April 2026 06:10:22 +0000 (0:00:01.227) 1:00:18.701 ******** 2026-04-11 06:10:36.424802 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-11 06:10:36.424813 | orchestrator | 2026-04-11 06:10:36.424824 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 06:10:36.424835 | orchestrator | Saturday 11 April 2026 06:10:24 +0000 (0:00:02.015) 1:00:20.717 ******** 2026-04-11 06:10:36.424846 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:10:36.424857 | orchestrator | 2026-04-11 06:10:36.424867 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-11 06:10:36.424878 | orchestrator | Saturday 11 April 2026 06:10:25 +0000 (0:00:01.218) 1:00:21.936 ******** 2026-04-11 06:10:36.424889 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:10:36.424900 | orchestrator | 2026-04-11 06:10:36.424910 | 
orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-11 06:10:36.424921 | orchestrator | Saturday 11 April 2026 06:10:26 +0000 (0:00:01.152) 1:00:23.089 ******** 2026-04-11 06:10:36.424932 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:10:36.424943 | orchestrator | 2026-04-11 06:10:36.424954 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 06:10:36.424964 | orchestrator | Saturday 11 April 2026 06:10:28 +0000 (0:00:01.215) 1:00:24.304 ******** 2026-04-11 06:10:36.424975 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:10:36.425039 | orchestrator | 2026-04-11 06:10:36.425052 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-11 06:10:36.425064 | orchestrator | Saturday 11 April 2026 06:10:29 +0000 (0:00:01.137) 1:00:25.441 ******** 2026-04-11 06:10:36.425076 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:10:36.425089 | orchestrator | 2026-04-11 06:10:36.425101 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-11 06:10:36.425128 | orchestrator | Saturday 11 April 2026 06:10:30 +0000 (0:00:01.135) 1:00:26.577 ******** 2026-04-11 06:10:36.425141 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:10:36.425154 | orchestrator | 2026-04-11 06:10:36.425166 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-11 06:10:36.425179 | orchestrator | Saturday 11 April 2026 06:10:31 +0000 (0:00:01.157) 1:00:27.735 ******** 2026-04-11 06:10:36.425191 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:10:36.425203 | orchestrator | 2026-04-11 06:10:36.425216 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-11 06:10:36.425228 | orchestrator | Saturday 11 April 2026 06:10:32 +0000 (0:00:01.142) 1:00:28.877 ******** 
2026-04-11 06:10:36.425241 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:10:36.425254 | orchestrator |
2026-04-11 06:10:36.425266 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-11 06:10:36.425278 | orchestrator | Saturday 11 April 2026 06:10:33 +0000 (0:00:01.188) 1:00:30.065 ********
2026-04-11 06:10:36.425291 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:10:36.425303 | orchestrator |
2026-04-11 06:10:36.425316 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-11 06:10:36.425329 | orchestrator | Saturday 11 April 2026 06:10:35 +0000 (0:00:01.197) 1:00:31.263 ********
2026-04-11 06:10:36.425343 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:10:36.425355 | orchestrator |
2026-04-11 06:10:36.425368 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-11 06:10:36.425381 | orchestrator | Saturday 11 April 2026 06:10:36 +0000 (0:00:01.173) 1:00:32.436 ********
2026-04-11 06:10:36.425396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:10:36.425421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200', 'dm-uuid-LVM-Bzm8veJ8WajxWE1rbQG3D6L1YQ7NRJWE5nYLkJZ3j15jpE3LHjt0hSXc3WZuWEzG'], 'uuids': ['5687e399-36a2-4cfe-ae2f-5c9610714106'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG']}})
2026-04-11 06:10:36.425456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7', 'scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d9c4f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-11 06:10:36.425471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PQeocr-BDfK-Omm3-UVAY-4ZFi-qC83-UyfjmY', 'scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c', 'scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003']}})
2026-04-11 06:10:36.425484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:10:36.425502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:10:36.425515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-28-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-11 06:10:36.425528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:10:36.425546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ', 'dm-uuid-CRYPT-LUKS2-4ce930e6d90647c5bf5f978d8b977bd0-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-11 06:10:36.425566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:10:37.723138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003', 'dm-uuid-LVM-pkDfTbVQleSwcS4k7Dh9BVsoBeNfZTa2LK4cAT3noeZwIltxQlTmbG23aNcLYOeQ'], 'uuids': ['4ce930e6-d906-47c5-bf5f-978d8b977bd0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ']}})
2026-04-11 06:10:37.723252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ESicMG-Y3he-y5ZC-yq3K-67sS-s0jj-bJ518K', 'scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898', 'scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200']}})
2026-04-11 06:10:37.723286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:10:37.723306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f54fce7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-11 06:10:37.723362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:10:37.723377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:10:37.723389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG', 'dm-uuid-CRYPT-LUKS2-5687e39936a24cfeae2f5c9610714106-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-11 06:10:37.723417 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:10:37.723440 | orchestrator |
2026-04-11 06:10:37.723452 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-11 06:10:37.723465 | orchestrator | Saturday 11 April 2026 06:10:37 +0000 (0:00:01.359) 1:00:33.795 ********
2026-04-11 06:10:37.723482 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:10:37.723496 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200', 'dm-uuid-LVM-Bzm8veJ8WajxWE1rbQG3D6L1YQ7NRJWE5nYLkJZ3j15jpE3LHjt0hSXc3WZuWEzG'], 'uuids': ['5687e399-36a2-4cfe-ae2f-5c9610714106'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG']}}, 'ansible_loop_var': 'item'})
2026-04-11 06:10:37.723518 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7', 'scsi-SQEMU_QEMU_HARDDISK_7d9c4f1c-d40b-45bb-8e87-01db3fc808d7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7d9c4f1c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:10:37.723539 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PQeocr-BDfK-Omm3-UVAY-4ZFi-qC83-UyfjmY', 'scsi-0QEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c', 'scsi-SQEMU_QEMU_HARDDISK_a5d3052c-abdd-49f3-bb0e-d9386ad7b01c'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003']}}, 'ansible_loop_var': 'item'})
2026-04-11 06:10:37.849645 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:10:37.849772 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:10:37.849791 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-28-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:10:37.849828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:10:37.849840 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ', 'dm-uuid-CRYPT-LUKS2-4ce930e6d90647c5bf5f978d8b977bd0-LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:10:37.849852 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:10:37.849883 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c5955808--db0e--564c--b1b7--e2d336084003-osd--block--c5955808--db0e--564c--b1b7--e2d336084003', 'dm-uuid-LVM-pkDfTbVQleSwcS4k7Dh9BVsoBeNfZTa2LK4cAT3noeZwIltxQlTmbG23aNcLYOeQ'], 'uuids': ['4ce930e6-d906-47c5-bf5f-978d8b977bd0'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a5d3052c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LK4cAT-3noe-ZwIl-txQl-TmbG-23aN-cLYOeQ']}}, 'ansible_loop_var': 'item'})
2026-04-11 06:10:37.849902 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ESicMG-Y3he-y5ZC-yq3K-67sS-s0jj-bJ518K', 'scsi-0QEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898', 'scsi-SQEMU_QEMU_HARDDISK_16023bbf-7f58-4b7d-abd1-681ece48f898'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '16023bbf', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode':
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--6808ea3d--3e7e--5ef0--9dd2--f9487250f200-osd--block--6808ea3d--3e7e--5ef0--9dd2--f9487250f200']}}, 'ansible_loop_var': 'item'})  2026-04-11 06:10:37.849926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:10:37.849949 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f54fce7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f54fce7-d818-40b6-a511-c244d10d845a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:11:06.636550 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:11:06.636729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:11:06.636759 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG', 'dm-uuid-CRYPT-LUKS2-5687e39936a24cfeae2f5c9610714106-5nYLkJ-Z3j1-5jpE-3LHj-t0hS-Xc3W-ZuWEzG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:11:06.636781 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:11:06.636802 | orchestrator | 2026-04-11 06:11:06.636822 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-11 06:11:06.636841 | orchestrator | Saturday 11 April 2026 06:10:39 +0000 (0:00:01.506) 1:00:35.302 ******** 2026-04-11 06:11:06.636860 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:11:06.636878 | orchestrator | 2026-04-11 06:11:06.636896 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-11 06:11:06.636915 | orchestrator | Saturday 11 April 2026 06:10:40 +0000 (0:00:01.483) 1:00:36.785 ******** 2026-04-11 06:11:06.636932 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:11:06.636949 | orchestrator | 2026-04-11 06:11:06.636967 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 06:11:06.636987 | orchestrator | Saturday 11 April 2026 06:10:41 +0000 (0:00:01.138) 1:00:37.924 ******** 2026-04-11 06:11:06.637005 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:11:06.637025 | orchestrator | 2026-04-11 06:11:06.637043 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 06:11:06.637062 | orchestrator | Saturday 11 April 2026 06:10:43 +0000 (0:00:01.448) 1:00:39.373 ******** 2026-04-11 06:11:06.637115 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:11:06.637135 | orchestrator | 2026-04-11 06:11:06.637154 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 06:11:06.637165 | orchestrator | Saturday 11 April 2026 06:10:44 +0000 (0:00:01.122) 1:00:40.496 ******** 2026-04-11 06:11:06.637176 | orchestrator | skipping: [testbed-node-3] 2026-04-11 
06:11:06.637187 | orchestrator | 2026-04-11 06:11:06.637198 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 06:11:06.637211 | orchestrator | Saturday 11 April 2026 06:10:45 +0000 (0:00:01.280) 1:00:41.776 ******** 2026-04-11 06:11:06.637230 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:11:06.637248 | orchestrator | 2026-04-11 06:11:06.637266 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 06:11:06.637285 | orchestrator | Saturday 11 April 2026 06:10:46 +0000 (0:00:01.159) 1:00:42.935 ******** 2026-04-11 06:11:06.637302 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-11 06:11:06.637321 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-11 06:11:06.637340 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-11 06:11:06.637359 | orchestrator | 2026-04-11 06:11:06.637377 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 06:11:06.637396 | orchestrator | Saturday 11 April 2026 06:10:48 +0000 (0:00:01.755) 1:00:44.691 ******** 2026-04-11 06:11:06.637434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-11 06:11:06.637453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-11 06:11:06.637472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-11 06:11:06.637489 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:11:06.637505 | orchestrator | 2026-04-11 06:11:06.637522 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 06:11:06.637539 | orchestrator | Saturday 11 April 2026 06:10:49 +0000 (0:00:01.176) 1:00:45.867 ******** 2026-04-11 06:11:06.637584 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-04-11 06:11:06.637604 | 
orchestrator | 2026-04-11 06:11:06.637622 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 06:11:06.637653 | orchestrator | Saturday 11 April 2026 06:10:50 +0000 (0:00:01.179) 1:00:47.047 ******** 2026-04-11 06:11:06.637673 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:11:06.637691 | orchestrator | 2026-04-11 06:11:06.637711 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 06:11:06.637730 | orchestrator | Saturday 11 April 2026 06:10:52 +0000 (0:00:01.167) 1:00:48.215 ******** 2026-04-11 06:11:06.637748 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:11:06.637765 | orchestrator | 2026-04-11 06:11:06.637782 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 06:11:06.637800 | orchestrator | Saturday 11 April 2026 06:10:53 +0000 (0:00:01.155) 1:00:49.370 ******** 2026-04-11 06:11:06.637818 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:11:06.637836 | orchestrator | 2026-04-11 06:11:06.637853 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 06:11:06.637872 | orchestrator | Saturday 11 April 2026 06:10:54 +0000 (0:00:01.228) 1:00:50.599 ******** 2026-04-11 06:11:06.637892 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:11:06.637910 | orchestrator | 2026-04-11 06:11:06.637929 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 06:11:06.637948 | orchestrator | Saturday 11 April 2026 06:10:55 +0000 (0:00:01.244) 1:00:51.843 ******** 2026-04-11 06:11:06.637967 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 06:11:06.637985 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 06:11:06.638004 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-04-11 06:11:06.638136 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:11:06.638150 | orchestrator | 2026-04-11 06:11:06.638161 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 06:11:06.638172 | orchestrator | Saturday 11 April 2026 06:10:57 +0000 (0:00:01.430) 1:00:53.274 ******** 2026-04-11 06:11:06.638182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 06:11:06.638233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 06:11:06.638245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 06:11:06.638256 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:11:06.638267 | orchestrator | 2026-04-11 06:11:06.638278 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 06:11:06.638289 | orchestrator | Saturday 11 April 2026 06:10:58 +0000 (0:00:01.465) 1:00:54.740 ******** 2026-04-11 06:11:06.638299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 06:11:06.638310 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 06:11:06.638321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 06:11:06.638332 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:11:06.638342 | orchestrator | 2026-04-11 06:11:06.638353 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 06:11:06.638364 | orchestrator | Saturday 11 April 2026 06:10:59 +0000 (0:00:01.448) 1:00:56.188 ******** 2026-04-11 06:11:06.638390 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:11:06.638401 | orchestrator | 2026-04-11 06:11:06.638411 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 06:11:06.638422 | orchestrator | Saturday 11 April 2026 06:11:01 +0000 
(0:00:01.282) 1:00:57.471 ******** 2026-04-11 06:11:06.638433 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-11 06:11:06.638444 | orchestrator | 2026-04-11 06:11:06.638454 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 06:11:06.638465 | orchestrator | Saturday 11 April 2026 06:11:02 +0000 (0:00:01.394) 1:00:58.866 ******** 2026-04-11 06:11:06.638476 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:11:06.638487 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:11:06.638498 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:11:06.638509 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-11 06:11:06.638519 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 06:11:06.638530 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 06:11:06.638541 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 06:11:06.638552 | orchestrator | 2026-04-11 06:11:06.638563 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 06:11:06.638574 | orchestrator | Saturday 11 April 2026 06:11:04 +0000 (0:00:02.228) 1:01:01.095 ******** 2026-04-11 06:11:06.638584 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:11:06.638595 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:11:06.638606 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:11:06.638616 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-11 06:11:06.638627 
| orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 06:11:06.638638 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 06:11:06.638648 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 06:11:06.638659 | orchestrator | 2026-04-11 06:11:06.638683 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-04-11 06:12:01.980422 | orchestrator | Saturday 11 April 2026 06:11:07 +0000 (0:00:02.692) 1:01:03.787 ******** 2026-04-11 06:12:01.980540 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:12:01.980556 | orchestrator | 2026-04-11 06:12:01.980569 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-04-11 06:12:01.980595 | orchestrator | Saturday 11 April 2026 06:11:09 +0000 (0:00:02.232) 1:01:06.020 ******** 2026-04-11 06:12:01.980608 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 06:12:01.980620 | orchestrator | 2026-04-11 06:12:01.980631 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-04-11 06:12:01.980642 | orchestrator | Saturday 11 April 2026 06:11:12 +0000 (0:00:02.951) 1:01:08.971 ******** 2026-04-11 06:12:01.980653 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 06:12:01.980664 | orchestrator | 2026-04-11 06:12:01.980675 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 06:12:01.980685 | orchestrator | Saturday 11 April 2026 06:11:15 +0000 (0:00:02.271) 1:01:11.243 ******** 2026-04-11 06:12:01.980696 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-04-11 06:12:01.980707 | orchestrator | 2026-04-11 06:12:01.980740 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 06:12:01.980751 | orchestrator | Saturday 11 April 2026 06:11:16 +0000 (0:00:01.315) 1:01:12.559 ******** 2026-04-11 06:12:01.980762 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-04-11 06:12:01.980773 | orchestrator | 2026-04-11 06:12:01.980783 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 06:12:01.980794 | orchestrator | Saturday 11 April 2026 06:11:17 +0000 (0:00:01.225) 1:01:13.785 ******** 2026-04-11 06:12:01.980805 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.980816 | orchestrator | 2026-04-11 06:12:01.980828 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 06:12:01.980838 | orchestrator | Saturday 11 April 2026 06:11:18 +0000 (0:00:01.163) 1:01:14.948 ******** 2026-04-11 06:12:01.980849 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:01.980861 | orchestrator | 2026-04-11 06:12:01.980871 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-04-11 06:12:01.980882 | orchestrator | Saturday 11 April 2026 06:11:20 +0000 (0:00:01.506) 1:01:16.454 ******** 2026-04-11 06:12:01.980892 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:01.980903 | orchestrator | 2026-04-11 06:12:01.980914 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 06:12:01.980925 | orchestrator | Saturday 11 April 2026 06:11:21 +0000 (0:00:01.567) 1:01:18.021 ******** 2026-04-11 06:12:01.980935 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:01.980946 | orchestrator | 2026-04-11 06:12:01.980957 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 06:12:01.980968 | orchestrator | Saturday 11 April 2026 06:11:23 +0000 (0:00:01.528) 1:01:19.550 ******** 2026-04-11 06:12:01.980978 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.980989 | orchestrator | 2026-04-11 06:12:01.980999 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 06:12:01.981010 | orchestrator | Saturday 11 April 2026 06:11:24 +0000 (0:00:01.156) 1:01:20.707 ******** 2026-04-11 06:12:01.981021 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981031 | orchestrator | 2026-04-11 06:12:01.981042 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 06:12:01.981053 | orchestrator | Saturday 11 April 2026 06:11:25 +0000 (0:00:01.147) 1:01:21.854 ******** 2026-04-11 06:12:01.981063 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981074 | orchestrator | 2026-04-11 06:12:01.981085 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 06:12:01.981095 | orchestrator | Saturday 11 April 2026 06:11:26 +0000 (0:00:01.127) 1:01:22.982 ******** 2026-04-11 06:12:01.981106 | 
orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:01.981117 | orchestrator | 2026-04-11 06:12:01.981127 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 06:12:01.981138 | orchestrator | Saturday 11 April 2026 06:11:28 +0000 (0:00:01.531) 1:01:24.513 ******** 2026-04-11 06:12:01.981149 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:01.981159 | orchestrator | 2026-04-11 06:12:01.981170 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 06:12:01.981180 | orchestrator | Saturday 11 April 2026 06:11:29 +0000 (0:00:01.556) 1:01:26.070 ******** 2026-04-11 06:12:01.981191 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981202 | orchestrator | 2026-04-11 06:12:01.981212 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 06:12:01.981223 | orchestrator | Saturday 11 April 2026 06:11:31 +0000 (0:00:01.199) 1:01:27.270 ******** 2026-04-11 06:12:01.981234 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981280 | orchestrator | 2026-04-11 06:12:01.981291 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 06:12:01.981302 | orchestrator | Saturday 11 April 2026 06:11:32 +0000 (0:00:01.141) 1:01:28.411 ******** 2026-04-11 06:12:01.981313 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:01.981331 | orchestrator | 2026-04-11 06:12:01.981342 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 06:12:01.981353 | orchestrator | Saturday 11 April 2026 06:11:33 +0000 (0:00:01.231) 1:01:29.644 ******** 2026-04-11 06:12:01.981363 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:01.981374 | orchestrator | 2026-04-11 06:12:01.981384 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 06:12:01.981395 
| orchestrator | Saturday 11 April 2026 06:11:34 +0000 (0:00:01.185) 1:01:30.830 ******** 2026-04-11 06:12:01.981406 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:01.981416 | orchestrator | 2026-04-11 06:12:01.981444 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 06:12:01.981455 | orchestrator | Saturday 11 April 2026 06:11:35 +0000 (0:00:01.175) 1:01:32.005 ******** 2026-04-11 06:12:01.981466 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981477 | orchestrator | 2026-04-11 06:12:01.981488 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 06:12:01.981504 | orchestrator | Saturday 11 April 2026 06:11:36 +0000 (0:00:01.136) 1:01:33.142 ******** 2026-04-11 06:12:01.981515 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981525 | orchestrator | 2026-04-11 06:12:01.981536 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 06:12:01.981546 | orchestrator | Saturday 11 April 2026 06:11:38 +0000 (0:00:01.177) 1:01:34.319 ******** 2026-04-11 06:12:01.981557 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981568 | orchestrator | 2026-04-11 06:12:01.981578 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 06:12:01.981588 | orchestrator | Saturday 11 April 2026 06:11:39 +0000 (0:00:01.116) 1:01:35.436 ******** 2026-04-11 06:12:01.981599 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:01.981610 | orchestrator | 2026-04-11 06:12:01.981620 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 06:12:01.981631 | orchestrator | Saturday 11 April 2026 06:11:40 +0000 (0:00:01.162) 1:01:36.598 ******** 2026-04-11 06:12:01.981642 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:01.981652 | orchestrator | 2026-04-11 06:12:01.981662 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-11 06:12:01.981673 | orchestrator | Saturday 11 April 2026 06:11:41 +0000 (0:00:01.165) 1:01:37.763 ******** 2026-04-11 06:12:01.981684 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981694 | orchestrator | 2026-04-11 06:12:01.981705 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-11 06:12:01.981715 | orchestrator | Saturday 11 April 2026 06:11:42 +0000 (0:00:01.120) 1:01:38.884 ******** 2026-04-11 06:12:01.981726 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981737 | orchestrator | 2026-04-11 06:12:01.981747 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-11 06:12:01.981758 | orchestrator | Saturday 11 April 2026 06:11:43 +0000 (0:00:01.214) 1:01:40.098 ******** 2026-04-11 06:12:01.981768 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981779 | orchestrator | 2026-04-11 06:12:01.981789 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-11 06:12:01.981800 | orchestrator | Saturday 11 April 2026 06:11:45 +0000 (0:00:01.263) 1:01:41.361 ******** 2026-04-11 06:12:01.981811 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981821 | orchestrator | 2026-04-11 06:12:01.981832 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-11 06:12:01.981842 | orchestrator | Saturday 11 April 2026 06:11:46 +0000 (0:00:01.562) 1:01:42.924 ******** 2026-04-11 06:12:01.981853 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981863 | orchestrator | 2026-04-11 06:12:01.981874 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-11 06:12:01.981885 | orchestrator | Saturday 11 April 2026 06:11:47 +0000 (0:00:01.186) 1:01:44.111 ******** 
2026-04-11 06:12:01.981895 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981913 | orchestrator | 2026-04-11 06:12:01.981924 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-11 06:12:01.981935 | orchestrator | Saturday 11 April 2026 06:11:49 +0000 (0:00:01.143) 1:01:45.255 ******** 2026-04-11 06:12:01.981945 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981956 | orchestrator | 2026-04-11 06:12:01.981966 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-11 06:12:01.981978 | orchestrator | Saturday 11 April 2026 06:11:50 +0000 (0:00:01.121) 1:01:46.377 ******** 2026-04-11 06:12:01.981988 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.981999 | orchestrator | 2026-04-11 06:12:01.982009 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-11 06:12:01.982082 | orchestrator | Saturday 11 April 2026 06:11:51 +0000 (0:00:01.223) 1:01:47.600 ******** 2026-04-11 06:12:01.982093 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.982104 | orchestrator | 2026-04-11 06:12:01.982115 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-11 06:12:01.982126 | orchestrator | Saturday 11 April 2026 06:11:52 +0000 (0:00:01.212) 1:01:48.812 ******** 2026-04-11 06:12:01.982136 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.982147 | orchestrator | 2026-04-11 06:12:01.982157 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-11 06:12:01.982168 | orchestrator | Saturday 11 April 2026 06:11:53 +0000 (0:00:01.214) 1:01:50.027 ******** 2026-04-11 06:12:01.982178 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.982189 | orchestrator | 2026-04-11 06:12:01.982200 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-11 06:12:01.982210 | orchestrator | Saturday 11 April 2026 06:11:55 +0000 (0:00:01.207) 1:01:51.235 ******** 2026-04-11 06:12:01.982221 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:01.982231 | orchestrator | 2026-04-11 06:12:01.982242 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-11 06:12:01.982271 | orchestrator | Saturday 11 April 2026 06:11:56 +0000 (0:00:01.201) 1:01:52.436 ******** 2026-04-11 06:12:01.982282 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:01.982292 | orchestrator | 2026-04-11 06:12:01.982303 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-11 06:12:01.982313 | orchestrator | Saturday 11 April 2026 06:11:58 +0000 (0:00:02.026) 1:01:54.462 ******** 2026-04-11 06:12:01.982324 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:01.982335 | orchestrator | 2026-04-11 06:12:01.982345 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-11 06:12:01.982356 | orchestrator | Saturday 11 April 2026 06:12:00 +0000 (0:00:02.314) 1:01:56.776 ******** 2026-04-11 06:12:01.982367 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-04-11 06:12:01.982377 | orchestrator | 2026-04-11 06:12:01.982388 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-11 06:12:01.982406 | orchestrator | Saturday 11 April 2026 06:12:01 +0000 (0:00:01.405) 1:01:58.182 ******** 2026-04-11 06:12:49.811758 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.811881 | orchestrator | 2026-04-11 06:12:49.811899 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-11 06:12:49.811908 | orchestrator | Saturday 11 April 2026 06:12:03 +0000 (0:00:01.172) 1:01:59.355 ******** 
2026-04-11 06:12:49.811930 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.811937 | orchestrator | 2026-04-11 06:12:49.811944 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-11 06:12:49.811951 | orchestrator | Saturday 11 April 2026 06:12:04 +0000 (0:00:01.150) 1:02:00.505 ******** 2026-04-11 06:12:49.811957 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 06:12:49.811964 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 06:12:49.811972 | orchestrator | 2026-04-11 06:12:49.811994 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-11 06:12:49.812001 | orchestrator | Saturday 11 April 2026 06:12:06 +0000 (0:00:01.987) 1:02:02.493 ******** 2026-04-11 06:12:49.812007 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:49.812015 | orchestrator | 2026-04-11 06:12:49.812032 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-11 06:12:49.812039 | orchestrator | Saturday 11 April 2026 06:12:07 +0000 (0:00:01.494) 1:02:03.988 ******** 2026-04-11 06:12:49.812053 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812059 | orchestrator | 2026-04-11 06:12:49.812066 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-11 06:12:49.812072 | orchestrator | Saturday 11 April 2026 06:12:08 +0000 (0:00:01.183) 1:02:05.172 ******** 2026-04-11 06:12:49.812079 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812085 | orchestrator | 2026-04-11 06:12:49.812092 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-11 06:12:49.812099 | orchestrator | Saturday 11 April 2026 06:12:10 +0000 (0:00:01.199) 1:02:06.371 ******** 2026-04-11 06:12:49.812105 | orchestrator | 
skipping: [testbed-node-3] 2026-04-11 06:12:49.812112 | orchestrator | 2026-04-11 06:12:49.812118 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-11 06:12:49.812125 | orchestrator | Saturday 11 April 2026 06:12:11 +0000 (0:00:01.187) 1:02:07.559 ******** 2026-04-11 06:12:49.812132 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-04-11 06:12:49.812139 | orchestrator | 2026-04-11 06:12:49.812146 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-11 06:12:49.812156 | orchestrator | Saturday 11 April 2026 06:12:12 +0000 (0:00:01.133) 1:02:08.693 ******** 2026-04-11 06:12:49.812168 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:49.812180 | orchestrator | 2026-04-11 06:12:49.812191 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-11 06:12:49.812202 | orchestrator | Saturday 11 April 2026 06:12:14 +0000 (0:00:01.713) 1:02:10.406 ******** 2026-04-11 06:12:49.812213 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 06:12:49.812224 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 06:12:49.812236 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 06:12:49.812248 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812260 | orchestrator | 2026-04-11 06:12:49.812270 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-11 06:12:49.812282 | orchestrator | Saturday 11 April 2026 06:12:15 +0000 (0:00:01.211) 1:02:11.618 ******** 2026-04-11 06:12:49.812290 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812298 | orchestrator | 2026-04-11 06:12:49.812306 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-11 06:12:49.812314 | orchestrator | Saturday 11 April 2026 06:12:16 +0000 (0:00:01.154) 1:02:12.772 ******** 2026-04-11 06:12:49.812321 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812329 | orchestrator | 2026-04-11 06:12:49.812336 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-11 06:12:49.812344 | orchestrator | Saturday 11 April 2026 06:12:17 +0000 (0:00:01.399) 1:02:14.172 ******** 2026-04-11 06:12:49.812351 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812359 | orchestrator | 2026-04-11 06:12:49.812366 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-11 06:12:49.812374 | orchestrator | Saturday 11 April 2026 06:12:19 +0000 (0:00:01.224) 1:02:15.397 ******** 2026-04-11 06:12:49.812402 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812410 | orchestrator | 2026-04-11 06:12:49.812418 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-11 06:12:49.812426 | orchestrator | Saturday 11 April 2026 06:12:20 +0000 (0:00:01.143) 1:02:16.540 ******** 2026-04-11 06:12:49.812441 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812448 | orchestrator | 2026-04-11 06:12:49.812456 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-11 06:12:49.812463 | orchestrator | Saturday 11 April 2026 06:12:21 +0000 (0:00:01.200) 1:02:17.741 ******** 2026-04-11 06:12:49.812470 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:49.812478 | orchestrator | 2026-04-11 06:12:49.812486 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-11 06:12:49.812493 | orchestrator | Saturday 11 April 2026 06:12:24 +0000 (0:00:02.484) 1:02:20.225 ******** 2026-04-11 06:12:49.812501 | orchestrator | ok: 
[testbed-node-3] 2026-04-11 06:12:49.812508 | orchestrator | 2026-04-11 06:12:49.812516 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-11 06:12:49.812523 | orchestrator | Saturday 11 April 2026 06:12:25 +0000 (0:00:01.168) 1:02:21.394 ******** 2026-04-11 06:12:49.812531 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-04-11 06:12:49.812538 | orchestrator | 2026-04-11 06:12:49.812546 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-11 06:12:49.812569 | orchestrator | Saturday 11 April 2026 06:12:26 +0000 (0:00:01.180) 1:02:22.575 ******** 2026-04-11 06:12:49.812577 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812585 | orchestrator | 2026-04-11 06:12:49.812592 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-11 06:12:49.812604 | orchestrator | Saturday 11 April 2026 06:12:27 +0000 (0:00:01.162) 1:02:23.737 ******** 2026-04-11 06:12:49.812613 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812620 | orchestrator | 2026-04-11 06:12:49.812628 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-11 06:12:49.812635 | orchestrator | Saturday 11 April 2026 06:12:28 +0000 (0:00:01.241) 1:02:24.979 ******** 2026-04-11 06:12:49.812642 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812649 | orchestrator | 2026-04-11 06:12:49.812656 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-11 06:12:49.812662 | orchestrator | Saturday 11 April 2026 06:12:29 +0000 (0:00:01.177) 1:02:26.157 ******** 2026-04-11 06:12:49.812669 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812676 | orchestrator | 2026-04-11 06:12:49.812682 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-11 06:12:49.812689 | orchestrator | Saturday 11 April 2026 06:12:31 +0000 (0:00:01.169) 1:02:27.326 ******** 2026-04-11 06:12:49.812695 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812702 | orchestrator | 2026-04-11 06:12:49.812708 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-11 06:12:49.812715 | orchestrator | Saturday 11 April 2026 06:12:32 +0000 (0:00:01.151) 1:02:28.478 ******** 2026-04-11 06:12:49.812722 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812728 | orchestrator | 2026-04-11 06:12:49.812735 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-11 06:12:49.812741 | orchestrator | Saturday 11 April 2026 06:12:33 +0000 (0:00:01.454) 1:02:29.932 ******** 2026-04-11 06:12:49.812748 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812755 | orchestrator | 2026-04-11 06:12:49.812761 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-11 06:12:49.812768 | orchestrator | Saturday 11 April 2026 06:12:34 +0000 (0:00:01.188) 1:02:31.121 ******** 2026-04-11 06:12:49.812774 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:12:49.812781 | orchestrator | 2026-04-11 06:12:49.812787 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-11 06:12:49.812794 | orchestrator | Saturday 11 April 2026 06:12:36 +0000 (0:00:01.213) 1:02:32.334 ******** 2026-04-11 06:12:49.812801 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:12:49.812807 | orchestrator | 2026-04-11 06:12:49.812814 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-11 06:12:49.812820 | orchestrator | Saturday 11 April 2026 06:12:37 +0000 (0:00:01.250) 1:02:33.585 ******** 2026-04-11 06:12:49.812833 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-04-11 06:12:49.812839 | orchestrator | 2026-04-11 06:12:49.812846 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-11 06:12:49.812853 | orchestrator | Saturday 11 April 2026 06:12:38 +0000 (0:00:01.167) 1:02:34.753 ******** 2026-04-11 06:12:49.812859 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-04-11 06:12:49.812867 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-11 06:12:49.812873 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-11 06:12:49.812880 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-11 06:12:49.812887 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-11 06:12:49.812893 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-11 06:12:49.812900 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-11 06:12:49.812907 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-11 06:12:49.812914 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 06:12:49.812921 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 06:12:49.812927 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 06:12:49.812934 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 06:12:49.812941 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 06:12:49.812947 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 06:12:49.812954 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-04-11 06:12:49.812960 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-04-11 06:12:49.812967 | orchestrator | 2026-04-11 06:12:49.812974 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-11 06:12:49.812980 | orchestrator | Saturday 11 April 2026 06:12:45 +0000 (0:00:06.496) 1:02:41.249 ******** 2026-04-11 06:12:49.812987 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-04-11 06:12:49.812993 | orchestrator | 2026-04-11 06:12:49.813000 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-11 06:12:49.813007 | orchestrator | Saturday 11 April 2026 06:12:46 +0000 (0:00:01.182) 1:02:42.431 ******** 2026-04-11 06:12:49.813013 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 06:12:49.813021 | orchestrator | 2026-04-11 06:12:49.813028 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-11 06:12:49.813034 | orchestrator | Saturday 11 April 2026 06:12:47 +0000 (0:00:01.549) 1:02:43.980 ******** 2026-04-11 06:12:49.813041 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 06:12:49.813048 | orchestrator | 2026-04-11 06:12:49.813054 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-11 06:12:49.813065 | orchestrator | Saturday 11 April 2026 06:12:49 +0000 (0:00:02.031) 1:02:46.012 ******** 2026-04-11 06:13:39.320428 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.320620 | orchestrator | 2026-04-11 06:13:39.320641 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-11 06:13:39.320671 | orchestrator | Saturday 11 April 2026 06:12:51 +0000 (0:00:01.245) 1:02:47.258 ******** 2026-04-11 06:13:39.320683 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.320694 | 
orchestrator | 2026-04-11 06:13:39.320705 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-11 06:13:39.320716 | orchestrator | Saturday 11 April 2026 06:12:52 +0000 (0:00:01.256) 1:02:48.514 ******** 2026-04-11 06:13:39.320727 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.320760 | orchestrator | 2026-04-11 06:13:39.320772 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-11 06:13:39.320782 | orchestrator | Saturday 11 April 2026 06:12:53 +0000 (0:00:01.144) 1:02:49.659 ******** 2026-04-11 06:13:39.320793 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.320803 | orchestrator | 2026-04-11 06:13:39.320814 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-11 06:13:39.320825 | orchestrator | Saturday 11 April 2026 06:12:54 +0000 (0:00:01.119) 1:02:50.779 ******** 2026-04-11 06:13:39.320835 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.320846 | orchestrator | 2026-04-11 06:13:39.320856 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-11 06:13:39.320868 | orchestrator | Saturday 11 April 2026 06:12:55 +0000 (0:00:01.133) 1:02:51.913 ******** 2026-04-11 06:13:39.320879 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.320890 | orchestrator | 2026-04-11 06:13:39.320901 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-11 06:13:39.320912 | orchestrator | Saturday 11 April 2026 06:12:56 +0000 (0:00:01.162) 1:02:53.075 ******** 2026-04-11 06:13:39.320923 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.320933 | orchestrator | 2026-04-11 06:13:39.320944 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-04-11 06:13:39.320956 | orchestrator | Saturday 11 April 2026 06:12:58 +0000 (0:00:01.188) 1:02:54.264 ******** 2026-04-11 06:13:39.320969 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.320981 | orchestrator | 2026-04-11 06:13:39.320993 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-11 06:13:39.321005 | orchestrator | Saturday 11 April 2026 06:12:59 +0000 (0:00:01.142) 1:02:55.407 ******** 2026-04-11 06:13:39.321017 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.321030 | orchestrator | 2026-04-11 06:13:39.321042 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-11 06:13:39.321054 | orchestrator | Saturday 11 April 2026 06:13:00 +0000 (0:00:01.148) 1:02:56.556 ******** 2026-04-11 06:13:39.321066 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.321078 | orchestrator | 2026-04-11 06:13:39.321090 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-11 06:13:39.321103 | orchestrator | Saturday 11 April 2026 06:13:01 +0000 (0:00:01.183) 1:02:57.740 ******** 2026-04-11 06:13:39.321115 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.321127 | orchestrator | 2026-04-11 06:13:39.321139 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-11 06:13:39.321151 | orchestrator | Saturday 11 April 2026 06:13:02 +0000 (0:00:01.150) 1:02:58.890 ******** 2026-04-11 06:13:39.321164 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-11 06:13:39.321176 | orchestrator | 2026-04-11 06:13:39.321188 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-11 06:13:39.321200 | orchestrator | Saturday 11 April 2026 06:13:07 +0000 (0:00:04.472) 1:03:03.363 ******** 2026-04-11 06:13:39.321213 | orchestrator | 
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 06:13:39.321226 | orchestrator | 2026-04-11 06:13:39.321239 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-11 06:13:39.321251 | orchestrator | Saturday 11 April 2026 06:13:08 +0000 (0:00:01.198) 1:03:04.562 ******** 2026-04-11 06:13:39.321267 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-04-11 06:13:39.321283 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-04-11 06:13:39.321308 | orchestrator | 2026-04-11 06:13:39.321321 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-11 06:13:39.321332 | orchestrator | Saturday 11 April 2026 06:13:13 +0000 (0:00:04.794) 1:03:09.356 ******** 2026-04-11 06:13:39.321342 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.321353 | orchestrator | 2026-04-11 06:13:39.321364 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-11 06:13:39.321374 | orchestrator | Saturday 11 April 2026 06:13:14 +0000 (0:00:01.139) 1:03:10.495 ******** 2026-04-11 06:13:39.321385 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.321396 | orchestrator | 2026-04-11 06:13:39.321406 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 06:13:39.321435 | orchestrator | Saturday 11 April 2026 06:13:15 +0000 (0:00:01.259) 1:03:11.755 ******** 2026-04-11 06:13:39.321447 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.321458 | orchestrator | 2026-04-11 06:13:39.321468 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 06:13:39.321484 | orchestrator | Saturday 11 April 2026 06:13:16 +0000 (0:00:01.140) 1:03:12.896 ******** 2026-04-11 06:13:39.321495 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.321530 | orchestrator | 2026-04-11 06:13:39.321543 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 06:13:39.321554 | orchestrator | Saturday 11 April 2026 06:13:17 +0000 (0:00:01.134) 1:03:14.031 ******** 2026-04-11 06:13:39.321565 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.321576 | orchestrator | 2026-04-11 06:13:39.321586 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 06:13:39.321597 | orchestrator | Saturday 11 April 2026 06:13:19 +0000 (0:00:01.185) 1:03:15.216 ******** 2026-04-11 06:13:39.321608 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:13:39.321620 | orchestrator | 2026-04-11 06:13:39.321630 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 06:13:39.321641 | orchestrator | Saturday 11 April 2026 06:13:20 +0000 (0:00:01.303) 1:03:16.519 ******** 2026-04-11 06:13:39.321658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 06:13:39.321677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 06:13:39.321695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 06:13:39.321710 | orchestrator | skipping: 
[testbed-node-3] 2026-04-11 06:13:39.321725 | orchestrator | 2026-04-11 06:13:39.321742 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 06:13:39.321758 | orchestrator | Saturday 11 April 2026 06:13:21 +0000 (0:00:01.484) 1:03:18.004 ******** 2026-04-11 06:13:39.321774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 06:13:39.321792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 06:13:39.321809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 06:13:39.321826 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.321843 | orchestrator | 2026-04-11 06:13:39.321861 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 06:13:39.321878 | orchestrator | Saturday 11 April 2026 06:13:23 +0000 (0:00:01.433) 1:03:19.437 ******** 2026-04-11 06:13:39.321897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-11 06:13:39.321916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-11 06:13:39.321934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-11 06:13:39.321952 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.321971 | orchestrator | 2026-04-11 06:13:39.321989 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 06:13:39.322103 | orchestrator | Saturday 11 April 2026 06:13:24 +0000 (0:00:01.458) 1:03:20.896 ******** 2026-04-11 06:13:39.322130 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:13:39.322147 | orchestrator | 2026-04-11 06:13:39.322158 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 06:13:39.322169 | orchestrator | Saturday 11 April 2026 06:13:25 +0000 (0:00:01.170) 1:03:22.067 ******** 2026-04-11 06:13:39.322180 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-04-11 06:13:39.322191 | orchestrator | 2026-04-11 06:13:39.322201 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-11 06:13:39.322212 | orchestrator | Saturday 11 April 2026 06:13:27 +0000 (0:00:01.354) 1:03:23.421 ******** 2026-04-11 06:13:39.322223 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:13:39.322234 | orchestrator | 2026-04-11 06:13:39.322245 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-11 06:13:39.322255 | orchestrator | Saturday 11 April 2026 06:13:29 +0000 (0:00:01.819) 1:03:25.240 ******** 2026-04-11 06:13:39.322266 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-04-11 06:13:39.322277 | orchestrator | 2026-04-11 06:13:39.322287 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-11 06:13:39.322298 | orchestrator | Saturday 11 April 2026 06:13:30 +0000 (0:00:01.772) 1:03:27.013 ******** 2026-04-11 06:13:39.322309 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 06:13:39.322320 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-11 06:13:39.322331 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 06:13:39.322341 | orchestrator | 2026-04-11 06:13:39.322352 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-11 06:13:39.322363 | orchestrator | Saturday 11 April 2026 06:13:33 +0000 (0:00:03.153) 1:03:30.167 ******** 2026-04-11 06:13:39.322373 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-11 06:13:39.322384 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-11 06:13:39.322395 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:13:39.322406 | orchestrator | 2026-04-11 06:13:39.322417 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-04-11 06:13:39.322427 | orchestrator | Saturday 11 April 2026 06:13:35 +0000 (0:00:02.004) 1:03:32.172 ******** 2026-04-11 06:13:39.322438 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:13:39.322449 | orchestrator | 2026-04-11 06:13:39.322460 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-11 06:13:39.322470 | orchestrator | Saturday 11 April 2026 06:13:37 +0000 (0:00:01.178) 1:03:33.351 ******** 2026-04-11 06:13:39.322481 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-04-11 06:13:39.322492 | orchestrator | 2026-04-11 06:13:39.322503 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-11 06:13:39.322538 | orchestrator | Saturday 11 April 2026 06:13:38 +0000 (0:00:01.594) 1:03:34.945 ******** 2026-04-11 06:13:39.322564 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 06:14:54.628078 | orchestrator | 2026-04-11 06:14:54.628194 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-11 06:14:54.628226 | orchestrator | Saturday 11 April 2026 06:13:40 +0000 (0:00:01.655) 1:03:36.601 ******** 2026-04-11 06:14:54.628238 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 06:14:54.628251 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-11 06:14:54.628263 | orchestrator | 2026-04-11 06:14:54.628274 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-11 06:14:54.628285 | orchestrator | Saturday 11 April 2026 06:13:45 +0000 (0:00:05.101) 1:03:41.702 ******** 
2026-04-11 06:14:54.628318 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 06:14:54.628330 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 06:14:54.628341 | orchestrator | 2026-04-11 06:14:54.628352 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-11 06:14:54.628363 | orchestrator | Saturday 11 April 2026 06:13:48 +0000 (0:00:03.014) 1:03:44.717 ******** 2026-04-11 06:14:54.628374 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-11 06:14:54.628385 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:14:54.628397 | orchestrator | 2026-04-11 06:14:54.628408 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-11 06:14:54.628418 | orchestrator | Saturday 11 April 2026 06:13:50 +0000 (0:00:02.042) 1:03:46.760 ******** 2026-04-11 06:14:54.628429 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-11 06:14:54.628440 | orchestrator | 2026-04-11 06:14:54.628451 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-11 06:14:54.628461 | orchestrator | Saturday 11 April 2026 06:13:52 +0000 (0:00:01.485) 1:03:48.245 ******** 2026-04-11 06:14:54.628472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:14:54.628484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:14:54.628495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:14:54.628506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-11 06:14:54.628517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:14:54.628528 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:14:54.628539 | orchestrator | 2026-04-11 06:14:54.628549 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-11 06:14:54.628560 | orchestrator | Saturday 11 April 2026 06:13:53 +0000 (0:00:01.962) 1:03:50.208 ******** 2026-04-11 06:14:54.628571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:14:54.628582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:14:54.628593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:14:54.628606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:14:54.628619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:14:54.628632 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:14:54.628644 | orchestrator | 2026-04-11 06:14:54.628657 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-11 06:14:54.628670 | orchestrator | Saturday 11 April 2026 06:13:55 +0000 (0:00:01.631) 1:03:51.840 ******** 2026-04-11 06:14:54.628683 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 06:14:54.628721 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 06:14:54.628735 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 06:14:54.628755 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 06:14:54.628769 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 06:14:54.628781 | orchestrator | 2026-04-11 06:14:54.628793 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-11 06:14:54.628825 | orchestrator | Saturday 11 April 2026 06:14:26 +0000 (0:00:31.019) 1:04:22.860 ******** 2026-04-11 06:14:54.628839 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:14:54.628851 | orchestrator | 2026-04-11 06:14:54.628869 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-11 06:14:54.628883 | orchestrator | Saturday 11 April 2026 06:14:27 +0000 (0:00:01.184) 1:04:24.044 ******** 2026-04-11 06:14:54.628896 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:14:54.628908 | orchestrator | 2026-04-11 06:14:54.628921 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-11 06:14:54.628933 | orchestrator | Saturday 11 April 2026 06:14:29 +0000 (0:00:01.281) 1:04:25.326 ******** 2026-04-11 06:14:54.628945 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-04-11 06:14:54.628958 | orchestrator | 2026-04-11 06:14:54.628970 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-04-11 06:14:54.628981 | orchestrator | Saturday 11 April 2026 06:14:30 +0000 (0:00:01.498) 1:04:26.824 ******** 2026-04-11 06:14:54.628991 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-04-11 06:14:54.629002 | orchestrator | 2026-04-11 06:14:54.629013 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-11 06:14:54.629023 | orchestrator | Saturday 11 April 2026 06:14:32 +0000 (0:00:01.489) 1:04:28.314 ******** 2026-04-11 06:14:54.629034 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:14:54.629045 | orchestrator | 2026-04-11 06:14:54.629056 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-11 06:14:54.629066 | orchestrator | Saturday 11 April 2026 06:14:34 +0000 (0:00:02.094) 1:04:30.409 ******** 2026-04-11 06:14:54.629077 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:14:54.629088 | orchestrator | 2026-04-11 06:14:54.629098 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-11 06:14:54.629109 | orchestrator | Saturday 11 April 2026 06:14:36 +0000 (0:00:02.035) 1:04:32.444 ******** 2026-04-11 06:14:54.629120 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:14:54.629131 | orchestrator | 2026-04-11 06:14:54.629142 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-11 06:14:54.629152 | orchestrator | Saturday 11 April 2026 06:14:38 +0000 (0:00:02.233) 1:04:34.678 ******** 2026-04-11 06:14:54.629163 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-11 06:14:54.629174 | orchestrator | 2026-04-11 06:14:54.629185 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-04-11 06:14:54.629195 | 
orchestrator | 2026-04-11 06:14:54.629206 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-11 06:14:54.629217 | orchestrator | Saturday 11 April 2026 06:14:41 +0000 (0:00:03.151) 1:04:37.829 ******** 2026-04-11 06:14:54.629228 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-04-11 06:14:54.629239 | orchestrator | 2026-04-11 06:14:54.629250 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-11 06:14:54.629260 | orchestrator | Saturday 11 April 2026 06:14:42 +0000 (0:00:01.143) 1:04:38.973 ******** 2026-04-11 06:14:54.629271 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:14:54.629282 | orchestrator | 2026-04-11 06:14:54.629317 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-11 06:14:54.629337 | orchestrator | Saturday 11 April 2026 06:14:44 +0000 (0:00:01.442) 1:04:40.416 ******** 2026-04-11 06:14:54.629348 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:14:54.629359 | orchestrator | 2026-04-11 06:14:54.629370 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-11 06:14:54.629380 | orchestrator | Saturday 11 April 2026 06:14:45 +0000 (0:00:01.128) 1:04:41.544 ******** 2026-04-11 06:14:54.629391 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:14:54.629402 | orchestrator | 2026-04-11 06:14:54.629412 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 06:14:54.629423 | orchestrator | Saturday 11 April 2026 06:14:46 +0000 (0:00:01.492) 1:04:43.037 ******** 2026-04-11 06:14:54.629433 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:14:54.629444 | orchestrator | 2026-04-11 06:14:54.629455 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-11 06:14:54.629465 | orchestrator | Saturday 
11 April 2026 06:14:47 +0000 (0:00:01.136) 1:04:44.174 ******** 2026-04-11 06:14:54.629476 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:14:54.629487 | orchestrator | 2026-04-11 06:14:54.629497 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-11 06:14:54.629508 | orchestrator | Saturday 11 April 2026 06:14:49 +0000 (0:00:01.156) 1:04:45.330 ******** 2026-04-11 06:14:54.629519 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:14:54.629529 | orchestrator | 2026-04-11 06:14:54.629540 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-11 06:14:54.629551 | orchestrator | Saturday 11 April 2026 06:14:50 +0000 (0:00:01.134) 1:04:46.464 ******** 2026-04-11 06:14:54.629561 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:14:54.629572 | orchestrator | 2026-04-11 06:14:54.629583 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-11 06:14:54.629593 | orchestrator | Saturday 11 April 2026 06:14:51 +0000 (0:00:01.197) 1:04:47.662 ******** 2026-04-11 06:14:54.629604 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:14:54.629615 | orchestrator | 2026-04-11 06:14:54.629625 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-11 06:14:54.629636 | orchestrator | Saturday 11 April 2026 06:14:52 +0000 (0:00:01.120) 1:04:48.783 ******** 2026-04-11 06:14:54.629647 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:14:54.629657 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:14:54.629668 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:14:54.629678 | orchestrator | 2026-04-11 06:14:54.629705 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-04-11 06:14:54.629723 | orchestrator | Saturday 11 April 2026 06:14:54 +0000 (0:00:02.047) 1:04:50.831 ******** 2026-04-11 06:15:20.051003 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:15:20.051124 | orchestrator | 2026-04-11 06:15:20.051157 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-11 06:15:20.051171 | orchestrator | Saturday 11 April 2026 06:14:56 +0000 (0:00:01.758) 1:04:52.589 ******** 2026-04-11 06:15:20.051182 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:15:20.051195 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:15:20.051208 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:15:20.051226 | orchestrator | 2026-04-11 06:15:20.051244 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-11 06:15:20.051261 | orchestrator | Saturday 11 April 2026 06:14:59 +0000 (0:00:03.090) 1:04:55.680 ******** 2026-04-11 06:15:20.051280 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-11 06:15:20.051301 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-11 06:15:20.051320 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-11 06:15:20.051369 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:15:20.051387 | orchestrator | 2026-04-11 06:15:20.051405 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-11 06:15:20.051423 | orchestrator | Saturday 11 April 2026 06:15:00 +0000 (0:00:01.498) 1:04:57.179 ******** 2026-04-11 06:15:20.051443 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-11 06:15:20.051465 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-11 06:15:20.051483 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-11 06:15:20.051502 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:15:20.051520 | orchestrator | 2026-04-11 06:15:20.051538 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-11 06:15:20.051560 | orchestrator | Saturday 11 April 2026 06:15:02 +0000 (0:00:01.732) 1:04:58.913 ******** 2026-04-11 06:15:20.051584 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:20.051608 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:20.051630 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:20.051652 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:15:20.051673 | orchestrator | 2026-04-11 06:15:20.051694 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-11 06:15:20.051716 | orchestrator | Saturday 11 April 2026 06:15:03 +0000 (0:00:01.253) 1:05:00.166 ******** 2026-04-11 06:15:20.051818 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 06:14:56.935127', 'end': '2026-04-11 06:14:56.983248', 'delta': '0:00:00.048121', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-11 06:15:20.051855 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '26fb3b048944', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 06:14:57.503991', 'end': '2026-04-11 06:14:57.553338', 'delta': '0:00:00.049347', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26fb3b048944'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-11 06:15:20.051907 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '5c0324173fbf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 06:14:58.151763', 'end': '2026-04-11 06:14:58.198751', 'delta': '0:00:00.046988', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5c0324173fbf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-11 06:15:20.051929 | orchestrator | 2026-04-11 06:15:20.051949 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-11 06:15:20.051970 | orchestrator | Saturday 11 April 2026 06:15:05 +0000 (0:00:01.326) 1:05:01.493 ******** 2026-04-11 06:15:20.051989 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:15:20.052010 | orchestrator | 2026-04-11 06:15:20.052030 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-11 06:15:20.052050 | orchestrator | Saturday 11 April 2026 06:15:06 +0000 (0:00:01.234) 1:05:02.728 ******** 2026-04-11 06:15:20.052071 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:15:20.052091 | orchestrator | 2026-04-11 06:15:20.052112 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-04-11 06:15:20.052130 | orchestrator | Saturday 11 April 2026 06:15:07 +0000 (0:00:01.220) 1:05:03.949 ******** 2026-04-11 06:15:20.052150 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:15:20.052171 | orchestrator | 2026-04-11 06:15:20.052192 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-11 06:15:20.052212 | orchestrator | Saturday 11 April 2026 06:15:08 +0000 (0:00:01.108) 1:05:05.058 ******** 2026-04-11 06:15:20.052232 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-11 06:15:20.052253 | orchestrator | 2026-04-11 06:15:20.052273 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 06:15:20.052293 | orchestrator | Saturday 11 April 2026 06:15:10 +0000 (0:00:01.918) 1:05:06.976 ******** 2026-04-11 06:15:20.052314 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:15:20.052333 | orchestrator | 2026-04-11 06:15:20.052352 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-11 06:15:20.052370 | orchestrator | Saturday 11 April 2026 06:15:11 +0000 (0:00:01.107) 1:05:08.083 ******** 2026-04-11 06:15:20.052389 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:15:20.052407 | orchestrator | 2026-04-11 06:15:20.052427 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-11 06:15:20.052443 | orchestrator | Saturday 11 April 2026 06:15:12 +0000 (0:00:01.098) 1:05:09.182 ******** 2026-04-11 06:15:20.052462 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:15:20.052479 | orchestrator | 2026-04-11 06:15:20.052497 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-11 06:15:20.052515 | orchestrator | Saturday 11 April 2026 06:15:14 +0000 (0:00:01.232) 1:05:10.414 ******** 2026-04-11 06:15:20.052535 | orchestrator | 
skipping: [testbed-node-4] 2026-04-11 06:15:20.052553 | orchestrator | 2026-04-11 06:15:20.052571 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-11 06:15:20.052604 | orchestrator | Saturday 11 April 2026 06:15:15 +0000 (0:00:01.213) 1:05:11.628 ******** 2026-04-11 06:15:20.052625 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:15:20.052645 | orchestrator | 2026-04-11 06:15:20.052664 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-11 06:15:20.052683 | orchestrator | Saturday 11 April 2026 06:15:16 +0000 (0:00:01.119) 1:05:12.747 ******** 2026-04-11 06:15:20.052699 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:15:20.052716 | orchestrator | 2026-04-11 06:15:20.052735 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-11 06:15:20.052801 | orchestrator | Saturday 11 April 2026 06:15:17 +0000 (0:00:01.203) 1:05:13.950 ******** 2026-04-11 06:15:20.052821 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:15:20.052839 | orchestrator | 2026-04-11 06:15:20.052857 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-11 06:15:20.052878 | orchestrator | Saturday 11 April 2026 06:15:18 +0000 (0:00:01.132) 1:05:15.083 ******** 2026-04-11 06:15:20.052895 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:15:20.052914 | orchestrator | 2026-04-11 06:15:20.052926 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-11 06:15:20.052954 | orchestrator | Saturday 11 April 2026 06:15:20 +0000 (0:00:01.168) 1:05:16.251 ******** 2026-04-11 06:15:22.580688 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:15:22.580864 | orchestrator | 2026-04-11 06:15:22.580882 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-11 06:15:22.580894 
| orchestrator | Saturday 11 April 2026 06:15:21 +0000 (0:00:01.147) 1:05:17.399 ******** 2026-04-11 06:15:22.580905 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:15:22.580916 | orchestrator | 2026-04-11 06:15:22.580926 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-11 06:15:22.580936 | orchestrator | Saturday 11 April 2026 06:15:22 +0000 (0:00:01.156) 1:05:18.555 ******** 2026-04-11 06:15:22.580948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:15:22.580963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2', 'dm-uuid-LVM-1JO1XI6e6VuGVeVzDykcfKbBtikjhudLLEUIdm7ttGNsolk0UkQjcUO4narXEX2E'], 'uuids': ['9d724d10-77ae-4967-ad2d-00bd58cf4b58'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E']}})  2026-04-11 06:15:22.580977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac', 'scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7ad0a670', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-11 06:15:22.580990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gs6fgb-1Wcf-xL0p-5nrc-t0Sp-iDOp-vEqK0z', 'scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb', 'scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855']}})  2026-04-11 06:15:22.581022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:15:22.581034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:15:22.581070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-11 06:15:22.581082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:15:22.581093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh', 'dm-uuid-CRYPT-LUKS2-f995fcc5d8e74f9b8df633437ec8101a-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 06:15:22.581103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:15:22.581113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855', 'dm-uuid-LVM-K7WW8kSs32CapDCsexGLtC6qsV1U5049IOnZa3AHrzxg1HkvDRqme1iBPNHDbFWh'], 'uuids': ['f995fcc5-d8e7-4f9b-8df6-33437ec8101a'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh']}})  2026-04-11 06:15:22.581130 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MaeyQs-lCkd-15by-ONeM-2vsv-Cp22-T0mgnh', 'scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f', 'scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2']}})  2026-04-11 06:15:22.581141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:15:22.581169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '122e9594', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-11 06:15:23.918589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:15:23.918726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-11 06:15:23.918744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E', 'dm-uuid-CRYPT-LUKS2-9d724d1077ae4967ad2d00bd58cf4b58-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-11 06:15:23.918854 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:15:23.918870 | orchestrator | 2026-04-11 06:15:23.918882 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-11 06:15:23.918895 | orchestrator | Saturday 11 April 2026 06:15:23 +0000 (0:00:01.378) 1:05:19.934 ******** 2026-04-11 06:15:23.918907 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:23.918936 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2', 'dm-uuid-LVM-1JO1XI6e6VuGVeVzDykcfKbBtikjhudLLEUIdm7ttGNsolk0UkQjcUO4narXEX2E'], 'uuids': ['9d724d10-77ae-4967-ad2d-00bd58cf4b58'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E']}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:23.918949 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac', 'scsi-SQEMU_QEMU_HARDDISK_7ad0a670-80b6-4125-8ef3-6216ce6e20ac'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7ad0a670', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:23.918982 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gs6fgb-1Wcf-xL0p-5nrc-t0Sp-iDOp-vEqK0z', 'scsi-0QEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb', 'scsi-SQEMU_QEMU_HARDDISK_f4a5e742-034f-4b0e-a516-1096b0558dbb'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855']}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:23.919007 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:23.919019 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:23.919037 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-33-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:23.919049 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:23.919068 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh', 'dm-uuid-CRYPT-LUKS2-f995fcc5d8e74f9b8df633437ec8101a-IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:29.305556 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:29.305656 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4afe3055--abd0--5615--b44c--a776d8127855-osd--block--4afe3055--abd0--5615--b44c--a776d8127855', 'dm-uuid-LVM-K7WW8kSs32CapDCsexGLtC6qsV1U5049IOnZa3AHrzxg1HkvDRqme1iBPNHDbFWh'], 'uuids': ['f995fcc5-d8e7-4f9b-8df6-33437ec8101a'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4a5e742', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['IOnZa3-AHrz-xg1H-kvDR-qme1-iBPN-HDbFWh']}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:29.305685 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-MaeyQs-lCkd-15by-ONeM-2vsv-Cp22-T0mgnh', 'scsi-0QEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f', 'scsi-SQEMU_QEMU_HARDDISK_01e94ece-63c1-4d76-b314-73e572c2946f'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '01e94ece', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1c2bdb62--89ba--5856--b2e0--5db351397ca2-osd--block--1c2bdb62--89ba--5856--b2e0--5db351397ca2']}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:29.305699 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:29.305726 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '122e9594', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1', 'scsi-SQEMU_QEMU_HARDDISK_122e9594-abc5-4472-bfad-4cda336274d4-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:29.305755 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:29.305828 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:29.305843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E', 'dm-uuid-CRYPT-LUKS2-9d724d1077ae4967ad2d00bd58cf4b58-LEUIdm-7ttG-Nsol-k0Uk-QjcU-O4na-rXEX2E'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:15:29.305855 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:15:29.305865 | orchestrator | 2026-04-11 06:15:29.305874 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-11 06:15:29.305883 | orchestrator | Saturday 11 April 2026 06:15:25 +0000 (0:00:01.424) 1:05:21.358 ******** 2026-04-11 06:15:29.305898 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:15:29.305907 | orchestrator | 2026-04-11 06:15:29.305915 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-11 06:15:29.305923 | orchestrator | Saturday 11 April 2026 06:15:26 +0000 (0:00:01.499) 1:05:22.858 ******** 2026-04-11 06:15:29.305931 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:15:29.305939 | orchestrator | 2026-04-11 06:15:29.305946 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 06:15:29.305954 | orchestrator | Saturday 11 April 2026 06:15:27 +0000 (0:00:01.171) 1:05:24.030 ******** 2026-04-11 06:15:29.305962 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:15:29.305970 | orchestrator | 2026-04-11 06:15:29.305978 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 06:15:29.305992 | orchestrator | Saturday 11 April 2026 06:15:29 +0000 (0:00:01.479) 1:05:25.509 ******** 2026-04-11 06:16:12.051823 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:12.051939 | orchestrator | 2026-04-11 06:16:12.051951 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 06:16:12.051960 | orchestrator | Saturday 11 April 2026 06:15:30 +0000 (0:00:01.185) 1:05:26.695 ******** 2026-04-11 06:16:12.051968 | orchestrator | skipping: [testbed-node-4] 2026-04-11 
06:16:12.051975 | orchestrator | 2026-04-11 06:16:12.051983 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 06:16:12.051991 | orchestrator | Saturday 11 April 2026 06:15:32 +0000 (0:00:01.744) 1:05:28.439 ******** 2026-04-11 06:16:12.051998 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:12.052005 | orchestrator | 2026-04-11 06:16:12.052012 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 06:16:12.052020 | orchestrator | Saturday 11 April 2026 06:15:33 +0000 (0:00:01.195) 1:05:29.635 ******** 2026-04-11 06:16:12.052027 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-11 06:16:12.052035 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-11 06:16:12.052042 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-11 06:16:12.052049 | orchestrator | 2026-04-11 06:16:12.052057 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 06:16:12.052064 | orchestrator | Saturday 11 April 2026 06:15:35 +0000 (0:00:01.678) 1:05:31.314 ******** 2026-04-11 06:16:12.052071 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-11 06:16:12.052079 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-11 06:16:12.052086 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-11 06:16:12.052094 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:12.052101 | orchestrator | 2026-04-11 06:16:12.052108 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 06:16:12.052115 | orchestrator | Saturday 11 April 2026 06:15:36 +0000 (0:00:01.217) 1:05:32.531 ******** 2026-04-11 06:16:12.052123 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-04-11 06:16:12.052131 | 
orchestrator | 2026-04-11 06:16:12.052139 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 06:16:12.052147 | orchestrator | Saturday 11 April 2026 06:15:37 +0000 (0:00:01.190) 1:05:33.722 ******** 2026-04-11 06:16:12.052155 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:12.052162 | orchestrator | 2026-04-11 06:16:12.052169 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 06:16:12.052177 | orchestrator | Saturday 11 April 2026 06:15:38 +0000 (0:00:01.141) 1:05:34.863 ******** 2026-04-11 06:16:12.052184 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:12.052191 | orchestrator | 2026-04-11 06:16:12.052199 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 06:16:12.052206 | orchestrator | Saturday 11 April 2026 06:15:39 +0000 (0:00:01.157) 1:05:36.021 ******** 2026-04-11 06:16:12.052213 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:12.052238 | orchestrator | 2026-04-11 06:16:12.052246 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 06:16:12.052264 | orchestrator | Saturday 11 April 2026 06:15:40 +0000 (0:00:01.168) 1:05:37.190 ******** 2026-04-11 06:16:12.052271 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:12.052279 | orchestrator | 2026-04-11 06:16:12.052286 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 06:16:12.052293 | orchestrator | Saturday 11 April 2026 06:15:42 +0000 (0:00:01.252) 1:05:38.442 ******** 2026-04-11 06:16:12.052300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 06:16:12.052307 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-11 06:16:12.052314 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-04-11 06:16:12.052321 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:12.052329 | orchestrator | 2026-04-11 06:16:12.052336 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 06:16:12.052343 | orchestrator | Saturday 11 April 2026 06:15:43 +0000 (0:00:01.471) 1:05:39.914 ******** 2026-04-11 06:16:12.052350 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 06:16:12.052357 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-11 06:16:12.052364 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-11 06:16:12.052371 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:12.052379 | orchestrator | 2026-04-11 06:16:12.052388 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 06:16:12.052396 | orchestrator | Saturday 11 April 2026 06:15:45 +0000 (0:00:01.788) 1:05:41.703 ******** 2026-04-11 06:16:12.052405 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 06:16:12.052413 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-11 06:16:12.052421 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-11 06:16:12.052430 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:12.052438 | orchestrator | 2026-04-11 06:16:12.052451 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 06:16:12.052464 | orchestrator | Saturday 11 April 2026 06:15:47 +0000 (0:00:01.853) 1:05:43.556 ******** 2026-04-11 06:16:12.052477 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:12.052488 | orchestrator | 2026-04-11 06:16:12.052500 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 06:16:12.052513 | orchestrator | Saturday 11 April 2026 06:15:48 +0000 
(0:00:01.205) 1:05:44.762 ******** 2026-04-11 06:16:12.052526 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-11 06:16:12.052539 | orchestrator | 2026-04-11 06:16:12.052553 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 06:16:12.052567 | orchestrator | Saturday 11 April 2026 06:15:49 +0000 (0:00:01.328) 1:05:46.090 ******** 2026-04-11 06:16:12.052589 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:16:12.052598 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:16:12.052607 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:16:12.052615 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 06:16:12.052623 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-11 06:16:12.052632 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 06:16:12.052640 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 06:16:12.052649 | orchestrator | 2026-04-11 06:16:12.052658 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 06:16:12.052666 | orchestrator | Saturday 11 April 2026 06:15:51 +0000 (0:00:01.866) 1:05:47.957 ******** 2026-04-11 06:16:12.052675 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:16:12.052690 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:16:12.052698 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:16:12.052707 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-11 06:16:12.052715 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-11 06:16:12.052724 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-11 06:16:12.052732 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 06:16:12.052741 | orchestrator | 2026-04-11 06:16:12.052749 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-04-11 06:16:12.052758 | orchestrator | Saturday 11 April 2026 06:15:54 +0000 (0:00:02.269) 1:05:50.227 ******** 2026-04-11 06:16:12.052766 | orchestrator | changed: [testbed-node-4] 2026-04-11 06:16:12.052773 | orchestrator | 2026-04-11 06:16:12.052780 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-04-11 06:16:12.052787 | orchestrator | Saturday 11 April 2026 06:15:55 +0000 (0:00:01.961) 1:05:52.188 ******** 2026-04-11 06:16:12.052794 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 06:16:12.052801 | orchestrator | 2026-04-11 06:16:12.052809 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-04-11 06:16:12.052816 | orchestrator | Saturday 11 April 2026 06:15:58 +0000 (0:00:02.592) 1:05:54.781 ******** 2026-04-11 06:16:12.052823 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 06:16:12.052830 | orchestrator | 2026-04-11 06:16:12.052837 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 06:16:12.052907 | orchestrator | Saturday 11 April 2026 06:16:00 +0000 (0:00:01.971) 1:05:56.752 ******** 2026-04-11 06:16:12.052918 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-04-11 06:16:12.052926 | orchestrator | 2026-04-11 06:16:12.052933 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 06:16:12.052940 | orchestrator | Saturday 11 April 2026 06:16:01 +0000 (0:00:01.169) 1:05:57.921 ******** 2026-04-11 06:16:12.052947 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-04-11 06:16:12.052955 | orchestrator | 2026-04-11 06:16:12.052962 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 06:16:12.052969 | orchestrator | Saturday 11 April 2026 06:16:02 +0000 (0:00:01.101) 1:05:59.023 ******** 2026-04-11 06:16:12.052976 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:12.052984 | orchestrator | 2026-04-11 06:16:12.052991 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 06:16:12.052999 | orchestrator | Saturday 11 April 2026 06:16:03 +0000 (0:00:01.112) 1:06:00.136 ******** 2026-04-11 06:16:12.053006 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:12.053013 | orchestrator | 2026-04-11 06:16:12.053020 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-04-11 06:16:12.053028 | orchestrator | Saturday 11 April 2026 06:16:05 +0000 (0:00:01.614) 1:06:01.750 ******** 2026-04-11 06:16:12.053035 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:12.053042 | orchestrator | 2026-04-11 06:16:12.053049 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 06:16:12.053056 | orchestrator | Saturday 11 April 2026 06:16:07 +0000 (0:00:01.512) 1:06:03.263 ******** 2026-04-11 06:16:12.053064 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:12.053071 | orchestrator | 2026-04-11 06:16:12.053078 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 06:16:12.053085 | orchestrator | Saturday 11 April 2026 06:16:08 +0000 (0:00:01.573) 1:06:04.836 ******** 2026-04-11 06:16:12.053098 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:12.053106 | orchestrator | 2026-04-11 06:16:12.053113 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 06:16:12.053120 | orchestrator | Saturday 11 April 2026 06:16:09 +0000 (0:00:01.154) 1:06:05.990 ******** 2026-04-11 06:16:12.053127 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:12.053135 | orchestrator | 2026-04-11 06:16:12.053142 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 06:16:12.053149 | orchestrator | Saturday 11 April 2026 06:16:10 +0000 (0:00:01.132) 1:06:07.123 ******** 2026-04-11 06:16:12.053156 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:12.053164 | orchestrator | 2026-04-11 06:16:12.053171 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 06:16:12.053184 | orchestrator | Saturday 11 April 2026 06:16:12 +0000 (0:00:01.130) 1:06:08.254 ******** 2026-04-11 06:16:51.551212 | 
orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:51.551314 | orchestrator | 2026-04-11 06:16:51.551326 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 06:16:51.551335 | orchestrator | Saturday 11 April 2026 06:16:13 +0000 (0:00:01.549) 1:06:09.804 ******** 2026-04-11 06:16:51.551342 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:51.551348 | orchestrator | 2026-04-11 06:16:51.551355 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 06:16:51.551362 | orchestrator | Saturday 11 April 2026 06:16:15 +0000 (0:00:01.584) 1:06:11.388 ******** 2026-04-11 06:16:51.551369 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551376 | orchestrator | 2026-04-11 06:16:51.551382 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 06:16:51.551388 | orchestrator | Saturday 11 April 2026 06:16:15 +0000 (0:00:00.793) 1:06:12.182 ******** 2026-04-11 06:16:51.551395 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551401 | orchestrator | 2026-04-11 06:16:51.551408 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 06:16:51.551414 | orchestrator | Saturday 11 April 2026 06:16:16 +0000 (0:00:00.780) 1:06:12.962 ******** 2026-04-11 06:16:51.551421 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:51.551427 | orchestrator | 2026-04-11 06:16:51.551433 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 06:16:51.551440 | orchestrator | Saturday 11 April 2026 06:16:17 +0000 (0:00:00.809) 1:06:13.771 ******** 2026-04-11 06:16:51.551446 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:51.551452 | orchestrator | 2026-04-11 06:16:51.551458 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 06:16:51.551464 
| orchestrator | Saturday 11 April 2026 06:16:18 +0000 (0:00:00.817) 1:06:14.588 ******** 2026-04-11 06:16:51.551471 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:51.551477 | orchestrator | 2026-04-11 06:16:51.551483 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 06:16:51.551489 | orchestrator | Saturday 11 April 2026 06:16:19 +0000 (0:00:00.817) 1:06:15.406 ******** 2026-04-11 06:16:51.551496 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551502 | orchestrator | 2026-04-11 06:16:51.551509 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 06:16:51.551515 | orchestrator | Saturday 11 April 2026 06:16:19 +0000 (0:00:00.777) 1:06:16.183 ******** 2026-04-11 06:16:51.551521 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551528 | orchestrator | 2026-04-11 06:16:51.551534 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 06:16:51.551541 | orchestrator | Saturday 11 April 2026 06:16:20 +0000 (0:00:00.854) 1:06:17.038 ******** 2026-04-11 06:16:51.551547 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551553 | orchestrator | 2026-04-11 06:16:51.551560 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 06:16:51.551566 | orchestrator | Saturday 11 April 2026 06:16:21 +0000 (0:00:00.788) 1:06:17.826 ******** 2026-04-11 06:16:51.551593 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:51.551600 | orchestrator | 2026-04-11 06:16:51.551606 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 06:16:51.551624 | orchestrator | Saturday 11 April 2026 06:16:22 +0000 (0:00:00.863) 1:06:18.690 ******** 2026-04-11 06:16:51.551631 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:51.551637 | orchestrator | 2026-04-11 06:16:51.551643 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-11 06:16:51.551649 | orchestrator | Saturday 11 April 2026 06:16:23 +0000 (0:00:00.863) 1:06:19.553 ******** 2026-04-11 06:16:51.551655 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551661 | orchestrator | 2026-04-11 06:16:51.551668 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-11 06:16:51.551674 | orchestrator | Saturday 11 April 2026 06:16:24 +0000 (0:00:00.834) 1:06:20.388 ******** 2026-04-11 06:16:51.551681 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551687 | orchestrator | 2026-04-11 06:16:51.551693 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-11 06:16:51.551699 | orchestrator | Saturday 11 April 2026 06:16:24 +0000 (0:00:00.804) 1:06:21.192 ******** 2026-04-11 06:16:51.551705 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551712 | orchestrator | 2026-04-11 06:16:51.551718 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-11 06:16:51.551725 | orchestrator | Saturday 11 April 2026 06:16:25 +0000 (0:00:00.801) 1:06:21.993 ******** 2026-04-11 06:16:51.551731 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551737 | orchestrator | 2026-04-11 06:16:51.551743 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-11 06:16:51.551749 | orchestrator | Saturday 11 April 2026 06:16:26 +0000 (0:00:00.773) 1:06:22.767 ******** 2026-04-11 06:16:51.551756 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551762 | orchestrator | 2026-04-11 06:16:51.551769 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-11 06:16:51.551776 | orchestrator | Saturday 11 April 2026 06:16:27 +0000 (0:00:00.788) 1:06:23.555 ******** 
2026-04-11 06:16:51.551782 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551789 | orchestrator | 2026-04-11 06:16:51.551795 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-11 06:16:51.551802 | orchestrator | Saturday 11 April 2026 06:16:28 +0000 (0:00:00.803) 1:06:24.359 ******** 2026-04-11 06:16:51.551808 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551814 | orchestrator | 2026-04-11 06:16:51.551820 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-11 06:16:51.551828 | orchestrator | Saturday 11 April 2026 06:16:28 +0000 (0:00:00.813) 1:06:25.173 ******** 2026-04-11 06:16:51.551834 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551840 | orchestrator | 2026-04-11 06:16:51.551847 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-11 06:16:51.551853 | orchestrator | Saturday 11 April 2026 06:16:29 +0000 (0:00:00.762) 1:06:25.935 ******** 2026-04-11 06:16:51.551859 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551865 | orchestrator | 2026-04-11 06:16:51.551885 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-11 06:16:51.551892 | orchestrator | Saturday 11 April 2026 06:16:30 +0000 (0:00:00.850) 1:06:26.786 ******** 2026-04-11 06:16:51.551899 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551905 | orchestrator | 2026-04-11 06:16:51.551911 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-11 06:16:51.551917 | orchestrator | Saturday 11 April 2026 06:16:31 +0000 (0:00:00.797) 1:06:27.583 ******** 2026-04-11 06:16:51.551941 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551947 | orchestrator | 2026-04-11 06:16:51.551953 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-11 06:16:51.551960 | orchestrator | Saturday 11 April 2026 06:16:32 +0000 (0:00:00.785) 1:06:28.369 ******** 2026-04-11 06:16:51.551974 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.551980 | orchestrator | 2026-04-11 06:16:51.551986 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-11 06:16:51.551992 | orchestrator | Saturday 11 April 2026 06:16:32 +0000 (0:00:00.786) 1:06:29.156 ******** 2026-04-11 06:16:51.551998 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:51.552004 | orchestrator | 2026-04-11 06:16:51.552009 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-11 06:16:51.552015 | orchestrator | Saturday 11 April 2026 06:16:34 +0000 (0:00:01.534) 1:06:30.691 ******** 2026-04-11 06:16:51.552021 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:51.552028 | orchestrator | 2026-04-11 06:16:51.552034 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-11 06:16:51.552040 | orchestrator | Saturday 11 April 2026 06:16:36 +0000 (0:00:01.869) 1:06:32.560 ******** 2026-04-11 06:16:51.552047 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-04-11 06:16:51.552055 | orchestrator | 2026-04-11 06:16:51.552062 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-11 06:16:51.552068 | orchestrator | Saturday 11 April 2026 06:16:37 +0000 (0:00:01.172) 1:06:33.733 ******** 2026-04-11 06:16:51.552074 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.552081 | orchestrator | 2026-04-11 06:16:51.552088 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-11 06:16:51.552095 | orchestrator | Saturday 11 April 2026 06:16:38 +0000 (0:00:01.201) 1:06:34.935 ******** 
2026-04-11 06:16:51.552102 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.552108 | orchestrator | 2026-04-11 06:16:51.552115 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-11 06:16:51.552121 | orchestrator | Saturday 11 April 2026 06:16:39 +0000 (0:00:01.149) 1:06:36.084 ******** 2026-04-11 06:16:51.552128 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-11 06:16:51.552135 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-11 06:16:51.552142 | orchestrator | 2026-04-11 06:16:51.552148 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-11 06:16:51.552155 | orchestrator | Saturday 11 April 2026 06:16:41 +0000 (0:00:01.882) 1:06:37.967 ******** 2026-04-11 06:16:51.552161 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:51.552167 | orchestrator | 2026-04-11 06:16:51.552180 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-11 06:16:51.552186 | orchestrator | Saturday 11 April 2026 06:16:43 +0000 (0:00:01.455) 1:06:39.423 ******** 2026-04-11 06:16:51.552192 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.552198 | orchestrator | 2026-04-11 06:16:51.552204 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-11 06:16:51.552209 | orchestrator | Saturday 11 April 2026 06:16:44 +0000 (0:00:01.292) 1:06:40.715 ******** 2026-04-11 06:16:51.552215 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.552221 | orchestrator | 2026-04-11 06:16:51.552227 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-11 06:16:51.552233 | orchestrator | Saturday 11 April 2026 06:16:45 +0000 (0:00:00.794) 1:06:41.509 ******** 2026-04-11 06:16:51.552240 | orchestrator | 
skipping: [testbed-node-4] 2026-04-11 06:16:51.552246 | orchestrator | 2026-04-11 06:16:51.552253 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-11 06:16:51.552259 | orchestrator | Saturday 11 April 2026 06:16:46 +0000 (0:00:00.807) 1:06:42.317 ******** 2026-04-11 06:16:51.552265 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-04-11 06:16:51.552272 | orchestrator | 2026-04-11 06:16:51.552278 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-11 06:16:51.552284 | orchestrator | Saturday 11 April 2026 06:16:47 +0000 (0:00:01.170) 1:06:43.488 ******** 2026-04-11 06:16:51.552297 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:16:51.552303 | orchestrator | 2026-04-11 06:16:51.552310 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-11 06:16:51.552316 | orchestrator | Saturday 11 April 2026 06:16:48 +0000 (0:00:01.715) 1:06:45.204 ******** 2026-04-11 06:16:51.552322 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-11 06:16:51.552329 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-11 06:16:51.552335 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-11 06:16:51.552341 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.552346 | orchestrator | 2026-04-11 06:16:51.552353 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-11 06:16:51.552359 | orchestrator | Saturday 11 April 2026 06:16:50 +0000 (0:00:01.216) 1:06:46.421 ******** 2026-04-11 06:16:51.552366 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.552372 | orchestrator | 2026-04-11 06:16:51.552378 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-11 06:16:51.552384 | orchestrator | Saturday 11 April 2026 06:16:51 +0000 (0:00:01.142) 1:06:47.563 ******** 2026-04-11 06:16:51.552391 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:16:51.552397 | orchestrator | 2026-04-11 06:16:51.552409 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-11 06:17:35.036041 | orchestrator | Saturday 11 April 2026 06:16:52 +0000 (0:00:01.222) 1:06:48.786 ******** 2026-04-11 06:17:35.036153 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.036167 | orchestrator | 2026-04-11 06:17:35.036179 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-11 06:17:35.036189 | orchestrator | Saturday 11 April 2026 06:16:53 +0000 (0:00:01.143) 1:06:49.929 ******** 2026-04-11 06:17:35.036199 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.036209 | orchestrator | 2026-04-11 06:17:35.036219 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-11 06:17:35.036228 | orchestrator | Saturday 11 April 2026 06:16:54 +0000 (0:00:01.161) 1:06:51.090 ******** 2026-04-11 06:17:35.036238 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.036247 | orchestrator | 2026-04-11 06:17:35.036257 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-11 06:17:35.036266 | orchestrator | Saturday 11 April 2026 06:16:55 +0000 (0:00:00.814) 1:06:51.905 ******** 2026-04-11 06:17:35.036276 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:17:35.036287 | orchestrator | 2026-04-11 06:17:35.036297 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-11 06:17:35.036307 | orchestrator | Saturday 11 April 2026 06:16:57 +0000 (0:00:02.128) 1:06:54.033 ******** 2026-04-11 06:17:35.036317 | orchestrator | ok: 
[testbed-node-4] 2026-04-11 06:17:35.036327 | orchestrator | 2026-04-11 06:17:35.036336 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-11 06:17:35.036346 | orchestrator | Saturday 11 April 2026 06:16:58 +0000 (0:00:00.823) 1:06:54.857 ******** 2026-04-11 06:17:35.036356 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-04-11 06:17:35.036365 | orchestrator | 2026-04-11 06:17:35.036375 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-11 06:17:35.036384 | orchestrator | Saturday 11 April 2026 06:16:59 +0000 (0:00:01.344) 1:06:56.202 ******** 2026-04-11 06:17:35.036394 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.036403 | orchestrator | 2026-04-11 06:17:35.036413 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-11 06:17:35.036423 | orchestrator | Saturday 11 April 2026 06:17:01 +0000 (0:00:01.145) 1:06:57.347 ******** 2026-04-11 06:17:35.036432 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.036442 | orchestrator | 2026-04-11 06:17:35.036451 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-11 06:17:35.036484 | orchestrator | Saturday 11 April 2026 06:17:02 +0000 (0:00:01.141) 1:06:58.488 ******** 2026-04-11 06:17:35.036494 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.036504 | orchestrator | 2026-04-11 06:17:35.036513 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-11 06:17:35.036523 | orchestrator | Saturday 11 April 2026 06:17:03 +0000 (0:00:01.211) 1:06:59.699 ******** 2026-04-11 06:17:35.036532 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.036543 | orchestrator | 2026-04-11 06:17:35.036555 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-11 06:17:35.036580 | orchestrator | Saturday 11 April 2026 06:17:04 +0000 (0:00:01.139) 1:07:00.839 ******** 2026-04-11 06:17:35.036591 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.036602 | orchestrator | 2026-04-11 06:17:35.036617 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-11 06:17:35.036634 | orchestrator | Saturday 11 April 2026 06:17:05 +0000 (0:00:01.234) 1:07:02.073 ******** 2026-04-11 06:17:35.036652 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.036668 | orchestrator | 2026-04-11 06:17:35.036684 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-11 06:17:35.036701 | orchestrator | Saturday 11 April 2026 06:17:07 +0000 (0:00:01.175) 1:07:03.249 ******** 2026-04-11 06:17:35.036717 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.036732 | orchestrator | 2026-04-11 06:17:35.036749 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-11 06:17:35.036766 | orchestrator | Saturday 11 April 2026 06:17:08 +0000 (0:00:01.157) 1:07:04.407 ******** 2026-04-11 06:17:35.036783 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.036800 | orchestrator | 2026-04-11 06:17:35.036813 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-11 06:17:35.036823 | orchestrator | Saturday 11 April 2026 06:17:09 +0000 (0:00:01.144) 1:07:05.551 ******** 2026-04-11 06:17:35.036835 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:17:35.036846 | orchestrator | 2026-04-11 06:17:35.036857 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-11 06:17:35.036868 | orchestrator | Saturday 11 April 2026 06:17:10 +0000 (0:00:00.797) 1:07:06.349 ******** 2026-04-11 06:17:35.036880 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-04-11 06:17:35.036892 | orchestrator | 2026-04-11 06:17:35.036903 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-11 06:17:35.036913 | orchestrator | Saturday 11 April 2026 06:17:11 +0000 (0:00:01.256) 1:07:07.606 ******** 2026-04-11 06:17:35.036922 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-04-11 06:17:35.036932 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-11 06:17:35.036942 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-11 06:17:35.036951 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-11 06:17:35.036961 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-11 06:17:35.036970 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-11 06:17:35.036980 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-11 06:17:35.036989 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-11 06:17:35.037049 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-11 06:17:35.037061 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-11 06:17:35.037071 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-11 06:17:35.037099 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-11 06:17:35.037109 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-11 06:17:35.037119 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-11 06:17:35.037128 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-04-11 06:17:35.037150 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-04-11 06:17:35.037160 | orchestrator | 2026-04-11 06:17:35.037170 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-11 06:17:35.037179 | orchestrator | Saturday 11 April 2026 06:17:17 +0000 (0:00:06.230) 1:07:13.836 ******** 2026-04-11 06:17:35.037189 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-04-11 06:17:35.037199 | orchestrator | 2026-04-11 06:17:35.037208 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-11 06:17:35.037217 | orchestrator | Saturday 11 April 2026 06:17:18 +0000 (0:00:01.103) 1:07:14.940 ******** 2026-04-11 06:17:35.037227 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 06:17:35.037238 | orchestrator | 2026-04-11 06:17:35.037248 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-11 06:17:35.037257 | orchestrator | Saturday 11 April 2026 06:17:20 +0000 (0:00:01.502) 1:07:16.442 ******** 2026-04-11 06:17:35.037267 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 06:17:35.037276 | orchestrator | 2026-04-11 06:17:35.037286 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-11 06:17:35.037295 | orchestrator | Saturday 11 April 2026 06:17:21 +0000 (0:00:01.618) 1:07:18.061 ******** 2026-04-11 06:17:35.037305 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.037314 | orchestrator | 2026-04-11 06:17:35.037323 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-11 06:17:35.037333 | orchestrator | Saturday 11 April 2026 06:17:22 +0000 (0:00:00.774) 1:07:18.835 ******** 2026-04-11 06:17:35.037343 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.037352 | 
orchestrator | 2026-04-11 06:17:35.037362 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-11 06:17:35.037371 | orchestrator | Saturday 11 April 2026 06:17:23 +0000 (0:00:00.780) 1:07:19.616 ******** 2026-04-11 06:17:35.037381 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.037390 | orchestrator | 2026-04-11 06:17:35.037400 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-11 06:17:35.037409 | orchestrator | Saturday 11 April 2026 06:17:24 +0000 (0:00:00.786) 1:07:20.402 ******** 2026-04-11 06:17:35.037419 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.037428 | orchestrator | 2026-04-11 06:17:35.037438 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-11 06:17:35.037454 | orchestrator | Saturday 11 April 2026 06:17:24 +0000 (0:00:00.789) 1:07:21.192 ******** 2026-04-11 06:17:35.037464 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.037473 | orchestrator | 2026-04-11 06:17:35.037483 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-11 06:17:35.037493 | orchestrator | Saturday 11 April 2026 06:17:25 +0000 (0:00:00.824) 1:07:22.017 ******** 2026-04-11 06:17:35.037502 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.037511 | orchestrator | 2026-04-11 06:17:35.037521 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-11 06:17:35.037531 | orchestrator | Saturday 11 April 2026 06:17:26 +0000 (0:00:00.773) 1:07:22.791 ******** 2026-04-11 06:17:35.037540 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.037550 | orchestrator | 2026-04-11 06:17:35.037559 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-04-11 06:17:35.037569 | orchestrator | Saturday 11 April 2026 06:17:27 +0000 (0:00:00.824) 1:07:23.615 ******** 2026-04-11 06:17:35.037578 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.037588 | orchestrator | 2026-04-11 06:17:35.037598 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-11 06:17:35.037614 | orchestrator | Saturday 11 April 2026 06:17:28 +0000 (0:00:00.835) 1:07:24.450 ******** 2026-04-11 06:17:35.037623 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.037633 | orchestrator | 2026-04-11 06:17:35.037642 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-11 06:17:35.037652 | orchestrator | Saturday 11 April 2026 06:17:29 +0000 (0:00:00.772) 1:07:25.223 ******** 2026-04-11 06:17:35.037661 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.037671 | orchestrator | 2026-04-11 06:17:35.037680 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-11 06:17:35.037690 | orchestrator | Saturday 11 April 2026 06:17:29 +0000 (0:00:00.804) 1:07:26.028 ******** 2026-04-11 06:17:35.037699 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:17:35.037709 | orchestrator | 2026-04-11 06:17:35.037718 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-11 06:17:35.037728 | orchestrator | Saturday 11 April 2026 06:17:30 +0000 (0:00:00.856) 1:07:26.884 ******** 2026-04-11 06:17:35.037737 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-04-11 06:17:35.037747 | orchestrator | 2026-04-11 06:17:35.037756 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-11 06:17:35.037766 | orchestrator | Saturday 11 April 2026 06:17:34 +0000 (0:00:04.167) 1:07:31.052 ******** 2026-04-11 06:17:35.037775 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 06:17:35.037785 | orchestrator | 2026-04-11 06:17:35.037801 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-11 06:18:15.398260 | orchestrator | Saturday 11 April 2026 06:17:35 +0000 (0:00:00.875) 1:07:31.928 ******** 2026-04-11 06:18:15.398354 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-11 06:18:15.398367 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-11 06:18:15.398375 | orchestrator | 2026-04-11 06:18:15.398382 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-11 06:18:15.398388 | orchestrator | Saturday 11 April 2026 06:17:40 +0000 (0:00:04.442) 1:07:36.370 ******** 2026-04-11 06:18:15.398395 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:18:15.398401 | orchestrator | 2026-04-11 06:18:15.398407 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-11 06:18:15.398413 | orchestrator | Saturday 11 April 2026 06:17:40 +0000 (0:00:00.772) 1:07:37.143 ******** 2026-04-11 06:18:15.398419 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:18:15.398424 | orchestrator | 2026-04-11 06:18:15.398431 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 06:18:15.398438 | orchestrator | Saturday 11 April 2026 06:17:41 +0000 (0:00:00.802) 1:07:37.945 ******** 2026-04-11 06:18:15.398444 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:18:15.398450 | orchestrator | 2026-04-11 06:18:15.398456 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 06:18:15.398461 | orchestrator | Saturday 11 April 2026 06:17:42 +0000 (0:00:00.808) 1:07:38.754 ******** 2026-04-11 06:18:15.398467 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:18:15.398473 | orchestrator | 2026-04-11 06:18:15.398479 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 06:18:15.398484 | orchestrator | Saturday 11 April 2026 06:17:43 +0000 (0:00:00.853) 1:07:39.608 ******** 2026-04-11 06:18:15.398507 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:18:15.398514 | orchestrator | 2026-04-11 06:18:15.398519 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 06:18:15.398525 | orchestrator | Saturday 11 April 2026 06:17:44 +0000 (0:00:00.798) 1:07:40.406 ******** 2026-04-11 06:18:15.398531 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:18:15.398538 | orchestrator | 2026-04-11 06:18:15.398544 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 06:18:15.398560 | orchestrator | Saturday 11 April 2026 06:17:45 +0000 (0:00:00.911) 1:07:41.317 ******** 2026-04-11 06:18:15.398566 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 06:18:15.398572 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-11 06:18:15.398578 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-11 06:18:15.398584 | orchestrator | skipping: 
[testbed-node-4] 2026-04-11 06:18:15.398589 | orchestrator | 2026-04-11 06:18:15.398595 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 06:18:15.398601 | orchestrator | Saturday 11 April 2026 06:17:46 +0000 (0:00:01.600) 1:07:42.918 ******** 2026-04-11 06:18:15.398607 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 06:18:15.398613 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-11 06:18:15.398618 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-11 06:18:15.398624 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:18:15.398630 | orchestrator | 2026-04-11 06:18:15.398635 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 06:18:15.398641 | orchestrator | Saturday 11 April 2026 06:17:47 +0000 (0:00:01.079) 1:07:43.998 ******** 2026-04-11 06:18:15.398647 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-11 06:18:15.398652 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-11 06:18:15.398658 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-11 06:18:15.398664 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:18:15.398670 | orchestrator | 2026-04-11 06:18:15.398675 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 06:18:15.398681 | orchestrator | Saturday 11 April 2026 06:17:48 +0000 (0:00:01.084) 1:07:45.083 ******** 2026-04-11 06:18:15.398687 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:18:15.398693 | orchestrator | 2026-04-11 06:18:15.398698 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 06:18:15.398704 | orchestrator | Saturday 11 April 2026 06:17:49 +0000 (0:00:00.820) 1:07:45.903 ******** 2026-04-11 06:18:15.398710 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-04-11 06:18:15.398716 | orchestrator | 2026-04-11 06:18:15.398721 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-11 06:18:15.398727 | orchestrator | Saturday 11 April 2026 06:17:50 +0000 (0:00:01.160) 1:07:47.064 ******** 2026-04-11 06:18:15.398733 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:18:15.398738 | orchestrator | 2026-04-11 06:18:15.398744 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-11 06:18:15.398750 | orchestrator | Saturday 11 April 2026 06:17:52 +0000 (0:00:01.439) 1:07:48.504 ******** 2026-04-11 06:18:15.398756 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-04-11 06:18:15.398761 | orchestrator | 2026-04-11 06:18:15.398780 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-11 06:18:15.398786 | orchestrator | Saturday 11 April 2026 06:17:53 +0000 (0:00:01.140) 1:07:49.644 ******** 2026-04-11 06:18:15.398792 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 06:18:15.398798 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-11 06:18:15.398804 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 06:18:15.398815 | orchestrator | 2026-04-11 06:18:15.398822 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-11 06:18:15.398829 | orchestrator | Saturday 11 April 2026 06:17:56 +0000 (0:00:03.184) 1:07:52.828 ******** 2026-04-11 06:18:15.398835 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-11 06:18:15.398842 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-11 06:18:15.398849 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:18:15.398856 | orchestrator | 2026-04-11 06:18:15.398863 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-04-11 06:18:15.398870 | orchestrator | Saturday 11 April 2026 06:17:58 +0000 (0:00:01.982) 1:07:54.811 ******** 2026-04-11 06:18:15.398877 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:18:15.398883 | orchestrator | 2026-04-11 06:18:15.398890 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-11 06:18:15.398896 | orchestrator | Saturday 11 April 2026 06:17:59 +0000 (0:00:00.782) 1:07:55.594 ******** 2026-04-11 06:18:15.398903 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-04-11 06:18:15.398910 | orchestrator | 2026-04-11 06:18:15.398916 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-11 06:18:15.398923 | orchestrator | Saturday 11 April 2026 06:18:00 +0000 (0:00:01.321) 1:07:56.915 ******** 2026-04-11 06:18:15.398930 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-11 06:18:15.398937 | orchestrator | 2026-04-11 06:18:15.398944 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-11 06:18:15.398950 | orchestrator | Saturday 11 April 2026 06:18:02 +0000 (0:00:01.642) 1:07:58.557 ******** 2026-04-11 06:18:15.398957 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 06:18:15.398964 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-11 06:18:15.398970 | orchestrator | 2026-04-11 06:18:15.398977 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-11 06:18:15.398983 | orchestrator | Saturday 11 April 2026 06:18:07 +0000 (0:00:05.088) 1:08:03.646 ******** 
2026-04-11 06:18:15.398990 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 06:18:15.398996 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-11 06:18:15.399003 | orchestrator |
2026-04-11 06:18:15.399013 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-11 06:18:15.399019 | orchestrator | Saturday 11 April 2026 06:18:10 +0000 (0:00:03.016)       1:08:06.662 ********
2026-04-11 06:18:15.399026 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-04-11 06:18:15.399033 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:18:15.399039 | orchestrator |
2026-04-11 06:18:15.399046 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-04-11 06:18:15.399052 | orchestrator | Saturday 11 April 2026 06:18:12 +0000 (0:00:01.165)       1:08:08.333 ********
2026-04-11 06:18:15.399059 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4
2026-04-11 06:18:15.399090 | orchestrator |
2026-04-11 06:18:15.399099 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-04-11 06:18:15.399115 | orchestrator | Saturday 11 April 2026 06:18:13 +0000 (0:00:01.165)       1:08:09.499 ********
2026-04-11 06:18:15.399126 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:18:15.399135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:18:15.399145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:18:15.399164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:18:15.399174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:18:15.399183 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:18:15.399193 | orchestrator |
2026-04-11 06:18:15.399202 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-04-11 06:18:15.399212 | orchestrator | Saturday 11 April 2026 06:18:14 +0000 (0:00:01.639)       1:08:11.138 ********
2026-04-11 06:18:15.399221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:18:15.399231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:18:15.399241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:18:15.399258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:19:21.519987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:19:21.520109 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:19:21.520127 | orchestrator |
2026-04-11 06:19:21.520140 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-04-11 06:19:21.520153 | orchestrator | Saturday 11 April 2026 06:18:16 +0000 (0:00:01.629)       1:08:12.768 ********
2026-04-11 06:19:21.520344 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:19:21.520360 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:19:21.520371 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:19:21.520382 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:19:21.520395 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-11 06:19:21.520406 | orchestrator |
2026-04-11 06:19:21.520417 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-04-11 06:19:21.520428 | orchestrator | Saturday 11 April 2026 06:18:47 +0000 (0:00:30.701)       1:08:43.469 ********
2026-04-11 06:19:21.520439 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:19:21.520450 | orchestrator |
2026-04-11 06:19:21.520461 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-04-11 06:19:21.520472 | orchestrator | Saturday 11 April 2026 06:18:48 +0000 (0:00:00.788)       1:08:44.257 ********
2026-04-11 06:19:21.520483 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:19:21.520494 | orchestrator |
2026-04-11 06:19:21.520505 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-11 06:19:21.520516 | orchestrator | Saturday 11 April 2026 06:18:48 +0000 (0:00:00.739)       1:08:44.997 ********
2026-04-11 06:19:21.520529 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4
2026-04-11 06:19:21.520543 | orchestrator |
2026-04-11 06:19:21.520556 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-11 06:19:21.520570 | orchestrator | Saturday 11 April 2026 06:18:50 +0000 (0:00:01.343)       1:08:46.341 ********
2026-04-11 06:19:21.520599 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4
2026-04-11 06:19:21.520642 | orchestrator |
2026-04-11 06:19:21.520749 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-11 06:19:21.520777 | orchestrator | Saturday 11 April 2026 06:18:51 +0000 (0:00:02.076)       1:08:47.495 ********
2026-04-11 06:19:21.520799 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:19:21.520820 | orchestrator |
2026-04-11 06:19:21.520837 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-11 06:19:21.520857 | orchestrator | Saturday 11 April 2026 06:18:53 +0000 (0:00:02.028)       1:08:49.571 ********
2026-04-11 06:19:21.520877 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:19:21.520896 | orchestrator |
2026-04-11 06:19:21.520914 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-11 06:19:21.520932 | orchestrator | Saturday 11 April 2026 06:18:55 +0000 (0:00:02.217)       1:08:51.600 ********
2026-04-11 06:19:21.520944 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:19:21.520954 | orchestrator |
2026-04-11 06:19:21.520965 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-11 06:19:21.520976 | orchestrator | Saturday 11 April 2026 06:18:57 +0000 (0:00:02.970)       1:08:53.818 ********
2026-04-11 06:19:21.520987 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-11 06:19:21.520998 | orchestrator |
2026-04-11 06:19:21.521011 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-04-11 06:19:21.521029 | orchestrator |
2026-04-11 06:19:21.521048 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-11 06:19:21.521066 | orchestrator | Saturday 11 April 2026 06:19:00 +0000 (0:00:02.970)       1:08:56.789 ********
2026-04-11 06:19:21.521083 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5
2026-04-11 06:19:21.521094 | orchestrator |
2026-04-11 06:19:21.521105 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-11 06:19:21.521124 | orchestrator | Saturday 11 April 2026 06:19:01 +0000 (0:00:01.135)       1:08:57.924 ********
2026-04-11 06:19:21.521142 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:21.521183 | orchestrator |
2026-04-11 06:19:21.521204 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-11 06:19:21.521222 | orchestrator | Saturday 11 April 2026 06:19:03 +0000 (0:00:01.424)       1:08:59.348 ********
2026-04-11 06:19:21.521239 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:21.521254 | orchestrator |
2026-04-11 06:19:21.521265 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-11 06:19:21.521276 | orchestrator | Saturday 11 April 2026 06:19:04 +0000 (0:00:01.160)       1:09:00.509 ********
2026-04-11 06:19:21.521287 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:21.521297 | orchestrator |
2026-04-11 06:19:21.521308 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-11 06:19:21.521319 | orchestrator | Saturday 11 April 2026 06:19:05 +0000 (0:00:01.441)       1:09:01.950 ********
2026-04-11 06:19:21.521329 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:21.521340 | orchestrator |
2026-04-11 06:19:21.521372 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-11 06:19:21.521384 | orchestrator | Saturday 11 April 2026 06:19:06 +0000 (0:00:01.246)       1:09:03.197 ********
2026-04-11 06:19:21.521395 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:21.521405 | orchestrator |
2026-04-11 06:19:21.521416 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-11 06:19:21.521427 | orchestrator | Saturday 11 April 2026 06:19:08 +0000 (0:00:01.181)       1:09:04.378 ********
2026-04-11 06:19:21.521437 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:21.521448 | orchestrator |
2026-04-11 06:19:21.521459 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-11 06:19:21.521470 | orchestrator | Saturday 11 April 2026 06:19:09 +0000 (0:00:01.174)       1:09:05.553 ********
2026-04-11 06:19:21.521481 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:19:21.521492 | orchestrator |
2026-04-11 06:19:21.521515 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-11 06:19:21.521526 | orchestrator | Saturday 11 April 2026 06:19:10 +0000 (0:00:01.168)       1:09:06.722 ********
2026-04-11 06:19:21.521537 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:21.521547 | orchestrator |
2026-04-11 06:19:21.521559 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-11 06:19:21.521569 | orchestrator | Saturday 11 April 2026 06:19:11 +0000 (0:00:01.100)       1:09:07.822 ********
2026-04-11 06:19:21.521580 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 06:19:21.521591 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 06:19:21.521602 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 06:19:21.521612 | orchestrator |
2026-04-11 06:19:21.521623 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-11 06:19:21.521634 | orchestrator | Saturday 11 April 2026 06:19:13 +0000 (0:00:01.754)       1:09:09.577 ********
2026-04-11 06:19:21.521645 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:21.521656 | orchestrator |
2026-04-11 06:19:21.521666 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-11 06:19:21.521677 | orchestrator | Saturday 11 April 2026 06:19:14 +0000 (0:00:01.302)       1:09:10.880 ********
2026-04-11 06:19:21.521688 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-11 06:19:21.521698 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-11 06:19:21.521709 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-11 06:19:21.521720 | orchestrator |
2026-04-11 06:19:21.521730 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-11 06:19:21.521741 | orchestrator | Saturday 11 April 2026 06:19:17 +0000 (0:00:03.231)       1:09:14.111 ********
2026-04-11 06:19:21.521760 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-11 06:19:21.521772 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-11 06:19:21.521782 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-11 06:19:21.521793 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:19:21.521805 | orchestrator |
2026-04-11 06:19:21.521823 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-11 06:19:21.521842 | orchestrator | Saturday 11 April 2026 06:19:19 +0000 (0:00:01.436)       1:09:15.547 ********
2026-04-11 06:19:21.521863 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 06:19:21.521886 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 06:19:21.521905 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 06:19:21.521925 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:19:21.521943 | orchestrator |
2026-04-11 06:19:21.521962 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-11 06:19:21.521981 | orchestrator | Saturday 11 April 2026 06:19:21 +0000 (0:00:02.100)       1:09:17.648 ********
2026-04-11 06:19:21.521998 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 06:19:21.522115 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 06:19:43.109092 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-11 06:19:43.109299 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:19:43.109322 | orchestrator |
2026-04-11 06:19:43.109335 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-11 06:19:43.109347 | orchestrator | Saturday 11 April 2026 06:19:22 +0000 (0:00:01.166)       1:09:18.814 ********
2026-04-11 06:19:43.109362 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'd4d463bff890', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-11 06:19:15.199381', 'end': '2026-04-11 06:19:15.245842', 'delta': '0:00:00.046461', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d463bff890'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-11 06:19:43.109377 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '26fb3b048944', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-11 06:19:15.756089', 'end': '2026-04-11 06:19:15.803777', 'delta': '0:00:00.047688', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26fb3b048944'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-11 06:19:43.109389 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '5c0324173fbf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-11 06:19:16.686436', 'end': '2026-04-11 06:19:16.756152', 'delta': '0:00:00.069716', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5c0324173fbf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-11 06:19:43.109400 | orchestrator |
2026-04-11 06:19:43.109412 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-11 06:19:43.109423 | orchestrator | Saturday 11 April 2026 06:19:23 +0000 (0:00:01.214)       1:09:20.029 ********
2026-04-11 06:19:43.109472 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:43.109486 | orchestrator |
2026-04-11 06:19:43.109497 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-11 06:19:43.109534 | orchestrator | Saturday 11 April 2026 06:19:25 +0000 (0:00:01.797)       1:09:21.826 ********
2026-04-11 06:19:43.109545 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:19:43.109556 | orchestrator |
2026-04-11 06:19:43.109567 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-11 06:19:43.109579 | orchestrator | Saturday 11 April 2026 06:19:26 +0000 (0:00:01.293)       1:09:23.120 ********
2026-04-11 06:19:43.109590 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:43.109600 | orchestrator |
2026-04-11 06:19:43.109611 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-11 06:19:43.109622 | orchestrator | Saturday 11 April 2026 06:19:28 +0000 (0:00:01.135)       1:09:24.255 ********
2026-04-11 06:19:43.109632 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-11 06:19:43.109643 | orchestrator |
2026-04-11 06:19:43.109654 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 06:19:43.109664 | orchestrator | Saturday 11 April 2026 06:19:31 +0000 (0:00:03.009)       1:09:27.264 ********
2026-04-11 06:19:43.109675 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:43.109686 | orchestrator |
2026-04-11 06:19:43.109696 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-11 06:19:43.109707 | orchestrator | Saturday 11 April 2026 06:19:32 +0000 (0:00:01.192)       1:09:28.457 ********
2026-04-11 06:19:43.109737 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:19:43.109749 | orchestrator |
2026-04-11 06:19:43.109760 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-11 06:19:43.109770 | orchestrator | Saturday 11 April 2026 06:19:33 +0000 (0:00:01.173)       1:09:29.631 ********
2026-04-11 06:19:43.109781 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:19:43.109792 | orchestrator |
2026-04-11 06:19:43.109803 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-11 06:19:43.109814 | orchestrator | Saturday 11 April 2026 06:19:34 +0000 (0:00:01.275)       1:09:30.906 ********
2026-04-11 06:19:43.109825 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:19:43.109836 | orchestrator |
2026-04-11 06:19:43.109847 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-11 06:19:43.109857 | orchestrator | Saturday 11 April 2026 06:19:35 +0000 (0:00:01.180)       1:09:32.086 ********
2026-04-11 06:19:43.109868 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:19:43.109879 | orchestrator |
2026-04-11 06:19:43.109890 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-11 06:19:43.109900 | orchestrator | Saturday 11 April 2026 06:19:37 +0000 (0:00:01.189)       1:09:33.276 ********
2026-04-11 06:19:43.109911 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:43.109922 | orchestrator |
2026-04-11 06:19:43.109933 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-11 06:19:43.109944 | orchestrator | Saturday 11 April 2026 06:19:38 +0000 (0:00:01.224)       1:09:34.501 ********
2026-04-11 06:19:43.109955 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:19:43.109966 | orchestrator |
2026-04-11 06:19:43.109977 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-11 06:19:43.109988 | orchestrator | Saturday 11 April 2026 06:19:39 +0000 (0:00:01.146)       1:09:35.648 ********
2026-04-11 06:19:43.109999 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:43.110010 | orchestrator |
2026-04-11 06:19:43.110134 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-11 06:19:43.110147 | orchestrator | Saturday 11 April 2026 06:19:40 +0000 (0:00:01.165)       1:09:36.814 ********
2026-04-11 06:19:43.110158 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:19:43.110169 | orchestrator |
2026-04-11 06:19:43.110180 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-11 06:19:43.110223 | orchestrator | Saturday 11 April 2026 06:19:41 +0000 (0:00:01.093)       1:09:37.907 ********
2026-04-11 06:19:43.110242 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:19:43.110261 | orchestrator |
2026-04-11 06:19:43.110294 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-11 06:19:43.110313 | orchestrator | Saturday 11 April 2026 06:19:42 +0000 (0:00:01.272)       1:09:39.180 ********
2026-04-11 06:19:43.110341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:19:43.110392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056', 'dm-uuid-LVM-6h6BzLnxVSITPOCXsTMPdEdwYnxpyl6jcENBjNwdWV4iIXI6HpUJIGXCmHnbKWOn'], 'uuids': ['9614ebde-9763-41b8-8070-f8f6acc1ef2b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn']}})
2026-04-11 06:19:43.110406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735', 'scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '17a8d280', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-11 06:19:43.110431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gv5rB0-5v31-5ChI-IvnR-CmdW-Foh5-mihe2a', 'scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3', 'scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412']}})
2026-04-11 06:19:43.225756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:19:43.225854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:19:43.225870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-11 06:19:43.225920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:19:43.225933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ', 'dm-uuid-CRYPT-LUKS2-bdcb2384073e4d9c84ce45a3274a4645-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-11 06:19:43.225945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:19:43.225957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412', 'dm-uuid-LVM-VdQ7qTAVdW9b0W0u4soeoyYCMykAdMqIVywyC0poxaFsTavehHwqykfd0GhP5gkQ'], 'uuids': ['bdcb2384-073e-4d9c-84ce-45a3274a4645'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ']}})
2026-04-11 06:19:43.225986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JtVcog-BSy1-h8Zb-tm9w-DiRX-1Dbq-bS56zI', 'scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78', 'scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056']}})
2026-04-11 06:19:43.225998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:19:43.226071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a75c226', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-11 06:19:43.226099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:19:43.226111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-11 06:19:43.226131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn', 'dm-uuid-CRYPT-LUKS2-9614ebde976341b88070f8f6acc1ef2b-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-11 06:19:44.590727 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:19:44.590852 | orchestrator |
2026-04-11 06:19:44.590876 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-11 06:19:44.590898 | orchestrator | Saturday 11 April 2026 06:19:44 +0000 (0:00:01.357)       1:09:40.537 ********
2026-04-11 06:19:44.590921 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-11 06:19:44.591033 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056', 'dm-uuid-LVM-6h6BzLnxVSITPOCXsTMPdEdwYnxpyl6jcENBjNwdWV4iIXI6HpUJIGXCmHnbKWOn'], 'uuids': ['9614ebde-9763-41b8-8070-f8f6acc1ef2b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn']}}, 'ansible_loop_var': 'item'})
2026-04-11 06:19:44.591061 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735', 'scsi-SQEMU_QEMU_HARDDISK_17a8d280-644e-4721-8a5f-cc5da3df4735'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU',
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '17a8d280', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:44.591186 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Gv5rB0-5v31-5ChI-IvnR-CmdW-Foh5-mihe2a', 'scsi-0QEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3', 'scsi-SQEMU_QEMU_HARDDISK_56bfdd1e-1096-4320-af10-78d4715d0af3'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412']}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:44.591303 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:44.591318 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:44.591345 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-11-01-39-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:44.591365 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:44.591377 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ', 'dm-uuid-CRYPT-LUKS2-bdcb2384073e4d9c84ce45a3274a4645-VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:44.591388 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:44.591409 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e8a3f20d--ed3f--5f34--b319--d0862efd8412-osd--block--e8a3f20d--ed3f--5f34--b319--d0862efd8412', 'dm-uuid-LVM-VdQ7qTAVdW9b0W0u4soeoyYCMykAdMqIVywyC0poxaFsTavehHwqykfd0GhP5gkQ'], 'uuids': ['bdcb2384-073e-4d9c-84ce-45a3274a4645'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '56bfdd1e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VywyC0-poxa-FsTa-vehH-wqyk-fd0G-hP5gkQ']}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:57.070508 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JtVcog-BSy1-h8Zb-tm9w-DiRX-1Dbq-bS56zI', 'scsi-0QEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78', 'scsi-SQEMU_QEMU_HARDDISK_1f351e21-4e71-4ad4-9e94-6bc6cac8fc78'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f351e21', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a718c651--a264--5d59--a3a1--3dddb23bb056-osd--block--a718c651--a264--5d59--a3a1--3dddb23bb056']}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:57.070647 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:57.070668 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a75c226', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a75c226-2d22-4742-843b-bdb54b765e20-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:57.070703 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:57.070738 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:57.070757 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn', 'dm-uuid-CRYPT-LUKS2-9614ebde976341b88070f8f6acc1ef2b-cENBjN-wdWV-4iIX-I6Hp-UJIG-XCmH-nbKWOn'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-11 06:19:57.070771 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:19:57.070785 | orchestrator | 2026-04-11 06:19:57.070798 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-11 06:19:57.070810 | orchestrator | Saturday 11 April 2026 06:19:45 +0000 (0:00:01.502) 1:09:42.040 ******** 2026-04-11 06:19:57.070822 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:19:57.070835 | orchestrator | 2026-04-11 06:19:57.070847 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-11 06:19:57.070859 | orchestrator | Saturday 11 April 2026 06:19:47 +0000 (0:00:01.529) 1:09:43.569 ******** 2026-04-11 06:19:57.070871 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:19:57.070882 | orchestrator | 2026-04-11 06:19:57.070894 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 06:19:57.070906 | orchestrator | Saturday 11 April 2026 06:19:48 +0000 (0:00:01.160) 1:09:44.730 ******** 2026-04-11 06:19:57.070918 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:19:57.070930 | orchestrator | 2026-04-11 06:19:57.070942 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 06:19:57.070954 | orchestrator | Saturday 11 April 2026 06:19:49 +0000 (0:00:01.473) 1:09:46.204 ******** 2026-04-11 06:19:57.070965 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:19:57.070977 | orchestrator | 2026-04-11 06:19:57.070989 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-11 06:19:57.071001 | orchestrator | Saturday 11 April 2026 06:19:51 +0000 (0:00:01.148) 1:09:47.352 ******** 2026-04-11 06:19:57.071012 | orchestrator | skipping: [testbed-node-5] 2026-04-11 
06:19:57.071024 | orchestrator | 2026-04-11 06:19:57.071039 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-11 06:19:57.071053 | orchestrator | Saturday 11 April 2026 06:19:52 +0000 (0:00:01.260) 1:09:48.613 ******** 2026-04-11 06:19:57.071067 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:19:57.071081 | orchestrator | 2026-04-11 06:19:57.071095 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-11 06:19:57.071108 | orchestrator | Saturday 11 April 2026 06:19:53 +0000 (0:00:01.173) 1:09:49.787 ******** 2026-04-11 06:19:57.071127 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-11 06:19:57.071142 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-11 06:19:57.071156 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-11 06:19:57.071170 | orchestrator | 2026-04-11 06:19:57.071184 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-11 06:19:57.071197 | orchestrator | Saturday 11 April 2026 06:19:55 +0000 (0:00:02.095) 1:09:51.883 ******** 2026-04-11 06:19:57.071246 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-11 06:19:57.071260 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-11 06:19:57.071273 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-11 06:19:57.071286 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:19:57.071299 | orchestrator | 2026-04-11 06:19:57.071310 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-11 06:19:57.071321 | orchestrator | Saturday 11 April 2026 06:19:56 +0000 (0:00:01.152) 1:09:53.035 ******** 2026-04-11 06:19:57.071332 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-04-11 06:19:57.071343 | 
orchestrator | 2026-04-11 06:19:57.071362 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-11 06:20:39.265020 | orchestrator | Saturday 11 April 2026 06:19:57 +0000 (0:00:01.160) 1:09:54.195 ******** 2026-04-11 06:20:39.265098 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:20:39.265105 | orchestrator | 2026-04-11 06:20:39.265110 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-11 06:20:39.265114 | orchestrator | Saturday 11 April 2026 06:19:59 +0000 (0:00:01.232) 1:09:55.428 ******** 2026-04-11 06:20:39.265118 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:20:39.265122 | orchestrator | 2026-04-11 06:20:39.265126 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-11 06:20:39.265130 | orchestrator | Saturday 11 April 2026 06:20:00 +0000 (0:00:01.132) 1:09:56.561 ******** 2026-04-11 06:20:39.265134 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:20:39.265138 | orchestrator | 2026-04-11 06:20:39.265142 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-11 06:20:39.265146 | orchestrator | Saturday 11 April 2026 06:20:01 +0000 (0:00:01.118) 1:09:57.679 ******** 2026-04-11 06:20:39.265150 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:20:39.265155 | orchestrator | 2026-04-11 06:20:39.265158 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-11 06:20:39.265162 | orchestrator | Saturday 11 April 2026 06:20:02 +0000 (0:00:01.245) 1:09:58.924 ******** 2026-04-11 06:20:39.265166 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-11 06:20:39.265171 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-11 06:20:39.265175 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-04-11 06:20:39.265178 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:20:39.265182 | orchestrator | 2026-04-11 06:20:39.265186 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-11 06:20:39.265190 | orchestrator | Saturday 11 April 2026 06:20:04 +0000 (0:00:01.421) 1:10:00.346 ******** 2026-04-11 06:20:39.265205 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-11 06:20:39.265209 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-11 06:20:39.265213 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-11 06:20:39.265217 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:20:39.265221 | orchestrator | 2026-04-11 06:20:39.265224 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-11 06:20:39.265228 | orchestrator | Saturday 11 April 2026 06:20:05 +0000 (0:00:01.385) 1:10:01.732 ******** 2026-04-11 06:20:39.265244 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-11 06:20:39.265248 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-11 06:20:39.265252 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-11 06:20:39.265256 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:20:39.265289 | orchestrator | 2026-04-11 06:20:39.265294 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-11 06:20:39.265298 | orchestrator | Saturday 11 April 2026 06:20:06 +0000 (0:00:01.476) 1:10:03.209 ******** 2026-04-11 06:20:39.265302 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:20:39.265306 | orchestrator | 2026-04-11 06:20:39.265309 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-11 06:20:39.265313 | orchestrator | Saturday 11 April 2026 06:20:08 +0000 
(0:00:01.211) 1:10:04.420 ******** 2026-04-11 06:20:39.265317 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-11 06:20:39.265321 | orchestrator | 2026-04-11 06:20:39.265324 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-11 06:20:39.265328 | orchestrator | Saturday 11 April 2026 06:20:09 +0000 (0:00:01.268) 1:10:05.689 ******** 2026-04-11 06:20:39.265332 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:20:39.265337 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:20:39.265340 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:20:39.265344 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-11 06:20:39.265348 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 06:20:39.265352 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-11 06:20:39.265356 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 06:20:39.265359 | orchestrator | 2026-04-11 06:20:39.265363 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-11 06:20:39.265367 | orchestrator | Saturday 11 April 2026 06:20:11 +0000 (0:00:02.015) 1:10:07.705 ******** 2026-04-11 06:20:39.265370 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-11 06:20:39.265374 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-11 06:20:39.265378 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-11 06:20:39.265382 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-11 06:20:39.265385 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-11 06:20:39.265389 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-11 06:20:39.265393 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-11 06:20:39.265396 | orchestrator | 2026-04-11 06:20:39.265400 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-04-11 06:20:39.265404 | orchestrator | Saturday 11 April 2026 06:20:13 +0000 (0:00:02.403) 1:10:10.109 ******** 2026-04-11 06:20:39.265408 | orchestrator | changed: [testbed-node-5] 2026-04-11 06:20:39.265411 | orchestrator | 2026-04-11 06:20:39.265424 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-04-11 06:20:39.265428 | orchestrator | Saturday 11 April 2026 06:20:15 +0000 (0:00:01.943) 1:10:12.053 ******** 2026-04-11 06:20:39.265432 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-11 06:20:39.265437 | orchestrator | 2026-04-11 06:20:39.265441 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-04-11 06:20:39.265445 | orchestrator | Saturday 11 April 2026 06:20:18 +0000 (0:00:02.639) 1:10:14.692 ******** 2026-04-11 06:20:39.265449 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-11 06:20:39.265456 | orchestrator | 2026-04-11 06:20:39.265460 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 06:20:39.265464 | orchestrator | Saturday 11 April 2026 06:20:20 +0000 (0:00:01.910) 1:10:16.603 ******** 2026-04-11 06:20:39.265468 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-04-11 06:20:39.265472 | orchestrator | 2026-04-11 06:20:39.265476 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 06:20:39.265479 | orchestrator | Saturday 11 April 2026 06:20:21 +0000 (0:00:01.128) 1:10:17.732 ******** 2026-04-11 06:20:39.265483 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-04-11 06:20:39.265487 | orchestrator | 2026-04-11 06:20:39.265490 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 06:20:39.265494 | orchestrator | Saturday 11 April 2026 06:20:22 +0000 (0:00:01.117) 1:10:18.849 ******** 2026-04-11 06:20:39.265498 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:20:39.265502 | orchestrator | 2026-04-11 06:20:39.265508 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 06:20:39.265512 | orchestrator | Saturday 11 April 2026 06:20:23 +0000 (0:00:01.132) 1:10:19.982 ******** 2026-04-11 06:20:39.265516 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:20:39.265519 | orchestrator | 2026-04-11 06:20:39.265523 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-04-11 06:20:39.265527 | orchestrator | Saturday 11 April 2026 06:20:25 +0000 (0:00:01.541) 1:10:21.524 ******** 2026-04-11 06:20:39.265531 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:20:39.265534 | orchestrator | 2026-04-11 06:20:39.265538 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 06:20:39.265542 | orchestrator | Saturday 11 April 2026 06:20:26 +0000 (0:00:01.602) 1:10:23.127 ******** 2026-04-11 06:20:39.265545 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:20:39.265549 | orchestrator | 2026-04-11 06:20:39.265553 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 06:20:39.265557 | orchestrator | Saturday 11 April 2026 06:20:28 +0000 (0:00:01.543) 1:10:24.671 ******** 2026-04-11 06:20:39.265560 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:20:39.265564 | orchestrator | 2026-04-11 06:20:39.265568 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 06:20:39.265571 | orchestrator | Saturday 11 April 2026 06:20:29 +0000 (0:00:01.143) 1:10:25.815 ******** 2026-04-11 06:20:39.265575 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:20:39.265579 | orchestrator | 2026-04-11 06:20:39.265583 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 06:20:39.265586 | orchestrator | Saturday 11 April 2026 06:20:30 +0000 (0:00:01.117) 1:10:26.932 ******** 2026-04-11 06:20:39.265590 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:20:39.265594 | orchestrator | 2026-04-11 06:20:39.265597 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 06:20:39.265601 | orchestrator | Saturday 11 April 2026 06:20:31 +0000 (0:00:01.107) 1:10:28.040 ******** 2026-04-11 06:20:39.265605 | 
orchestrator | ok: [testbed-node-5]
2026-04-11 06:20:39.265609 | orchestrator |
2026-04-11 06:20:39.265612 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-11 06:20:39.265616 | orchestrator | Saturday 11 April 2026 06:20:33 +0000 (0:00:01.616) 1:10:29.657 ********
2026-04-11 06:20:39.265620 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:20:39.265624 | orchestrator |
2026-04-11 06:20:39.265627 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-11 06:20:39.265631 | orchestrator | Saturday 11 April 2026 06:20:35 +0000 (0:00:01.599) 1:10:31.256 ********
2026-04-11 06:20:39.265635 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:20:39.265639 | orchestrator |
2026-04-11 06:20:39.265642 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-11 06:20:39.265649 | orchestrator | Saturday 11 April 2026 06:20:35 +0000 (0:00:00.789) 1:10:32.046 ********
2026-04-11 06:20:39.265653 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:20:39.265657 | orchestrator |
2026-04-11 06:20:39.265660 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-11 06:20:39.265664 | orchestrator | Saturday 11 April 2026 06:20:36 +0000 (0:00:00.821) 1:10:32.867 ********
2026-04-11 06:20:39.265668 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:20:39.265672 | orchestrator |
2026-04-11 06:20:39.265675 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-11 06:20:39.265679 | orchestrator | Saturday 11 April 2026 06:20:37 +0000 (0:00:00.815) 1:10:33.683 ********
2026-04-11 06:20:39.265683 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:20:39.265686 | orchestrator |
2026-04-11 06:20:39.265690 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-11 06:20:39.265694 | orchestrator | Saturday 11 April 2026 06:20:38 +0000 (0:00:00.815) 1:10:34.498 ********
2026-04-11 06:20:39.265698 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:20:39.265701 | orchestrator |
2026-04-11 06:20:39.265705 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-11 06:20:39.265709 | orchestrator | Saturday 11 April 2026 06:20:39 +0000 (0:00:00.792) 1:10:35.291 ********
2026-04-11 06:20:39.265712 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:20:39.265716 | orchestrator |
2026-04-11 06:20:39.265722 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-11 06:21:20.649811 | orchestrator | Saturday 11 April 2026 06:20:39 +0000 (0:00:00.776) 1:10:36.067 ********
2026-04-11 06:21:20.649926 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.649943 | orchestrator |
2026-04-11 06:21:20.649955 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-11 06:21:20.649967 | orchestrator | Saturday 11 April 2026 06:20:40 +0000 (0:00:00.778) 1:10:36.846 ********
2026-04-11 06:21:20.649978 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.649989 | orchestrator |
2026-04-11 06:21:20.650000 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-11 06:21:20.650014 | orchestrator | Saturday 11 April 2026 06:20:41 +0000 (0:00:00.758) 1:10:37.605 ********
2026-04-11 06:21:20.650110 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:21:20.650131 | orchestrator |
2026-04-11 06:21:20.650185 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-11 06:21:20.650207 | orchestrator | Saturday 11 April 2026 06:20:42 +0000 (0:00:00.793) 1:10:38.399 ********
2026-04-11 06:21:20.650226 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:21:20.650246 | orchestrator |
2026-04-11 06:21:20.650258 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-11 06:21:20.650269 | orchestrator | Saturday 11 April 2026 06:20:42 +0000 (0:00:00.776) 1:10:39.175 ********
2026-04-11 06:21:20.650280 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.650291 | orchestrator |
2026-04-11 06:21:20.650302 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-11 06:21:20.650350 | orchestrator | Saturday 11 April 2026 06:20:43 +0000 (0:00:00.898) 1:10:40.074 ********
2026-04-11 06:21:20.650364 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.650376 | orchestrator |
2026-04-11 06:21:20.650388 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-11 06:21:20.650401 | orchestrator | Saturday 11 April 2026 06:20:44 +0000 (0:00:00.772) 1:10:40.846 ********
2026-04-11 06:21:20.650429 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.650442 | orchestrator |
2026-04-11 06:21:20.650455 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-11 06:21:20.650467 | orchestrator | Saturday 11 April 2026 06:20:45 +0000 (0:00:00.738) 1:10:41.584 ********
2026-04-11 06:21:20.650480 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.650492 | orchestrator |
2026-04-11 06:21:20.650504 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-11 06:21:20.650541 | orchestrator | Saturday 11 April 2026 06:20:46 +0000 (0:00:00.808) 1:10:42.393 ********
2026-04-11 06:21:20.650554 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.650566 | orchestrator |
2026-04-11 06:21:20.650579 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-11 06:21:20.650592 | orchestrator | Saturday 11 April 2026 06:20:46 +0000 (0:00:00.785) 1:10:43.179 ********
2026-04-11 06:21:20.650604 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.650616 | orchestrator |
2026-04-11 06:21:20.650629 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-11 06:21:20.650642 | orchestrator | Saturday 11 April 2026 06:20:47 +0000 (0:00:00.762) 1:10:43.942 ********
2026-04-11 06:21:20.650654 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.650667 | orchestrator |
2026-04-11 06:21:20.650680 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-11 06:21:20.650692 | orchestrator | Saturday 11 April 2026 06:20:48 +0000 (0:00:00.761) 1:10:44.704 ********
2026-04-11 06:21:20.650702 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.650713 | orchestrator |
2026-04-11 06:21:20.650724 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-11 06:21:20.650735 | orchestrator | Saturday 11 April 2026 06:20:49 +0000 (0:00:00.760) 1:10:45.464 ********
2026-04-11 06:21:20.650745 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.650756 | orchestrator |
2026-04-11 06:21:20.650767 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-11 06:21:20.650778 | orchestrator | Saturday 11 April 2026 06:20:50 +0000 (0:00:00.769) 1:10:46.234 ********
2026-04-11 06:21:20.650788 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.650799 | orchestrator |
2026-04-11 06:21:20.650810 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-11 06:21:20.650820 | orchestrator | Saturday 11 April 2026 06:20:50 +0000 (0:00:00.768) 1:10:47.002 ********
2026-04-11 06:21:20.650831 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.650842 | orchestrator |
2026-04-11 06:21:20.650853 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-11 06:21:20.650864 | orchestrator | Saturday 11 April 2026 06:20:51 +0000 (0:00:00.785) 1:10:47.788 ********
2026-04-11 06:21:20.650874 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.650885 | orchestrator |
2026-04-11 06:21:20.650896 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-11 06:21:20.650907 | orchestrator | Saturday 11 April 2026 06:20:52 +0000 (0:00:00.772) 1:10:48.561 ********
2026-04-11 06:21:20.650918 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:21:20.650928 | orchestrator |
2026-04-11 06:21:20.650939 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-11 06:21:20.650950 | orchestrator | Saturday 11 April 2026 06:20:53 +0000 (0:00:01.592) 1:10:50.153 ********
2026-04-11 06:21:20.650961 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:21:20.650972 | orchestrator |
2026-04-11 06:21:20.650982 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-11 06:21:20.650993 | orchestrator | Saturday 11 April 2026 06:20:56 +0000 (0:00:02.416) 1:10:52.570 ********
2026-04-11 06:21:20.651004 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-04-11 06:21:20.651016 | orchestrator |
2026-04-11 06:21:20.651027 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-11 06:21:20.651037 | orchestrator | Saturday 11 April 2026 06:20:57 +0000 (0:00:01.123) 1:10:53.694 ********
2026-04-11 06:21:20.651048 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.651059 | orchestrator |
2026-04-11 06:21:20.651070 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-11 06:21:20.651119 | orchestrator | Saturday 11 April 2026 06:20:58 +0000 (0:00:01.161) 1:10:54.855 ********
2026-04-11 06:21:20.651144 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.651163 | orchestrator |
2026-04-11 06:21:20.651174 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-11 06:21:20.651185 | orchestrator | Saturday 11 April 2026 06:20:59 +0000 (0:00:01.139) 1:10:55.994 ********
2026-04-11 06:21:20.651196 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-11 06:21:20.651207 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-11 06:21:20.651218 | orchestrator |
2026-04-11 06:21:20.651228 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-11 06:21:20.651239 | orchestrator | Saturday 11 April 2026 06:21:01 +0000 (0:00:01.790) 1:10:57.785 ********
2026-04-11 06:21:20.651250 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:21:20.651261 | orchestrator |
2026-04-11 06:21:20.651272 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-11 06:21:20.651282 | orchestrator | Saturday 11 April 2026 06:21:03 +0000 (0:00:01.517) 1:10:59.303 ********
2026-04-11 06:21:20.651293 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.651304 | orchestrator |
2026-04-11 06:21:20.651344 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-11 06:21:20.651355 | orchestrator | Saturday 11 April 2026 06:21:04 +0000 (0:00:01.189) 1:11:00.492 ********
2026-04-11 06:21:20.651366 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.651377 | orchestrator |
2026-04-11 06:21:20.651388 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-11 06:21:20.651399 | orchestrator | Saturday 11 April 2026 06:21:05 +0000 (0:00:00.793) 1:11:01.286 ********
2026-04-11 06:21:20.651409 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.651420 | orchestrator |
2026-04-11 06:21:20.651437 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-11 06:21:20.651448 | orchestrator | Saturday 11 April 2026 06:21:05 +0000 (0:00:00.761) 1:11:02.047 ********
2026-04-11 06:21:20.651459 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-04-11 06:21:20.651470 | orchestrator |
2026-04-11 06:21:20.651481 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-11 06:21:20.651491 | orchestrator | Saturday 11 April 2026 06:21:06 +0000 (0:00:01.160) 1:11:03.208 ********
2026-04-11 06:21:20.651502 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:21:20.651513 | orchestrator |
2026-04-11 06:21:20.651524 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-11 06:21:20.651535 | orchestrator | Saturday 11 April 2026 06:21:08 +0000 (0:00:01.754) 1:11:04.963 ********
2026-04-11 06:21:20.651546 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-11 06:21:20.651557 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-11 06:21:20.651567 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-11 06:21:20.651578 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.651589 | orchestrator |
2026-04-11 06:21:20.651600 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-11 06:21:20.651610 | orchestrator | Saturday 11 April 2026 06:21:09 +0000 (0:00:01.240) 1:11:06.203 ********
2026-04-11 06:21:20.651621 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.651632 | orchestrator |
2026-04-11 06:21:20.651643 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-11 06:21:20.651654 | orchestrator | Saturday 11 April 2026 06:21:11 +0000 (0:00:01.144) 1:11:07.348 ********
2026-04-11 06:21:20.651664 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.651675 | orchestrator |
2026-04-11 06:21:20.651686 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-11 06:21:20.651697 | orchestrator | Saturday 11 April 2026 06:21:12 +0000 (0:00:01.146) 1:11:08.494 ********
2026-04-11 06:21:20.651708 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.651719 | orchestrator |
2026-04-11 06:21:20.651736 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-11 06:21:20.651747 | orchestrator | Saturday 11 April 2026 06:21:13 +0000 (0:00:01.194) 1:11:09.689 ********
2026-04-11 06:21:20.651758 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.651768 | orchestrator |
2026-04-11 06:21:20.651779 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-11 06:21:20.651790 | orchestrator | Saturday 11 April 2026 06:21:14 +0000 (0:00:01.128) 1:11:10.817 ********
2026-04-11 06:21:20.651801 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.651811 | orchestrator |
2026-04-11 06:21:20.651822 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-11 06:21:20.651833 | orchestrator | Saturday 11 April 2026 06:21:15 +0000 (0:00:00.840) 1:11:11.658 ********
2026-04-11 06:21:20.651844 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:21:20.651854 | orchestrator |
2026-04-11 06:21:20.651865 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-11 06:21:20.651876 | orchestrator | Saturday 11 April 2026 06:21:17 +0000 (0:00:02.106) 1:11:13.765 ********
2026-04-11 06:21:20.651887 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:21:20.651898 | orchestrator |
2026-04-11 06:21:20.651908 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-11 06:21:20.651919 | orchestrator | Saturday 11 April 2026 06:21:18 +0000 (0:00:00.795) 1:11:14.561 ********
2026-04-11 06:21:20.651930 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-04-11 06:21:20.651941 | orchestrator |
2026-04-11 06:21:20.651951 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-11 06:21:20.651962 | orchestrator | Saturday 11 April 2026 06:21:19 +0000 (0:00:01.128) 1:11:15.689 ********
2026-04-11 06:21:20.651985 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:21:20.652006 | orchestrator |
2026-04-11 06:21:20.652017 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-11 06:21:20.652034 | orchestrator | Saturday 11 April 2026 06:21:20 +0000 (0:00:01.160) 1:11:16.850 ********
2026-04-11 06:22:02.058845 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.058965 | orchestrator |
2026-04-11 06:22:02.058982 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-11 06:22:02.058996 | orchestrator | Saturday 11 April 2026 06:21:21 +0000 (0:00:01.219) 1:11:18.069 ********
2026-04-11 06:22:02.059007 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.059018 | orchestrator |
2026-04-11 06:22:02.059029 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-11 06:22:02.059040 | orchestrator | Saturday 11 April 2026 06:21:22 +0000 (0:00:01.142) 1:11:19.212 ********
2026-04-11 06:22:02.059051 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.059062 | orchestrator |
2026-04-11 06:22:02.059073 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-11 06:22:02.059084 | orchestrator | Saturday 11 April 2026 06:21:24 +0000 (0:00:01.231) 1:11:20.444 ********
2026-04-11 06:22:02.059095 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.059105 | orchestrator |
2026-04-11 06:22:02.059117 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-11 06:22:02.059128 | orchestrator | Saturday 11 April 2026 06:21:25 +0000 (0:00:01.150) 1:11:21.595 ********
2026-04-11 06:22:02.059138 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.059149 | orchestrator |
2026-04-11 06:22:02.059160 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-11 06:22:02.059172 | orchestrator | Saturday 11 April 2026 06:21:26 +0000 (0:00:01.148) 1:11:22.743 ********
2026-04-11 06:22:02.059183 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.059194 | orchestrator |
2026-04-11 06:22:02.059205 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-11 06:22:02.059232 | orchestrator | Saturday 11 April 2026 06:21:27 +0000 (0:00:01.162) 1:11:23.906 ********
2026-04-11 06:22:02.059243 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.059276 | orchestrator |
2026-04-11 06:22:02.059288 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-11 06:22:02.059299 | orchestrator | Saturday 11 April 2026 06:21:28 +0000 (0:00:01.219) 1:11:25.126 ********
2026-04-11 06:22:02.059310 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:22:02.059321 | orchestrator |
2026-04-11 06:22:02.059332 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-11 06:22:02.059343 | orchestrator | Saturday 11 April 2026 06:21:29 +0000 (0:00:00.799) 1:11:25.926 ********
2026-04-11 06:22:02.059354 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-04-11 06:22:02.059408 | orchestrator |
2026-04-11 06:22:02.059432 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-11 06:22:02.059446 | orchestrator | Saturday 11 April 2026 06:21:30 +0000 (0:00:01.142) 1:11:27.068 ********
2026-04-11 06:22:02.059459 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-04-11 06:22:02.059472 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-11 06:22:02.059485 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-11 06:22:02.059498 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-11 06:22:02.059510 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-11 06:22:02.059524 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-11 06:22:02.059536 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-11 06:22:02.059549 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-11 06:22:02.059562 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-11 06:22:02.059575 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-11 06:22:02.059588 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-11 06:22:02.059601 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-11 06:22:02.059615 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-11 06:22:02.059627 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-11 06:22:02.059641 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-04-11 06:22:02.059653 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-04-11 06:22:02.059666 | orchestrator |
2026-04-11 06:22:02.059679 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-11 06:22:02.059691 | orchestrator | Saturday 11 April 2026 06:21:36 +0000 (0:00:06.121) 1:11:33.190 ********
2026-04-11 06:22:02.059704 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-04-11 06:22:02.059718 | orchestrator |
2026-04-11 06:22:02.059731 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-11 06:22:02.059742 | orchestrator | Saturday 11 April 2026 06:21:38 +0000 (0:00:01.160) 1:11:34.351 ********
2026-04-11 06:22:02.059753 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 06:22:02.059765 | orchestrator |
2026-04-11 06:22:02.059776 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-11 06:22:02.059787 | orchestrator | Saturday 11 April 2026 06:21:39 +0000 (0:00:01.506) 1:11:35.857 ********
2026-04-11 06:22:02.059798 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 06:22:02.059809 | orchestrator |
2026-04-11 06:22:02.059819 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-11 06:22:02.059830 | orchestrator | Saturday 11 April 2026 06:21:41 +0000 (0:00:01.654) 1:11:37.512 ********
2026-04-11 06:22:02.059841 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.059851 | orchestrator |
2026-04-11 06:22:02.059862 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-11 06:22:02.059900 | orchestrator | Saturday 11 April 2026 06:21:42 +0000 (0:00:00.781) 1:11:38.293 ********
2026-04-11 06:22:02.059912 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.059923 | orchestrator |
2026-04-11 06:22:02.059934 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-11 06:22:02.059945 | orchestrator | Saturday 11 April 2026 06:21:42 +0000 (0:00:00.808) 1:11:39.101 ********
2026-04-11 06:22:02.059955 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.059966 | orchestrator |
2026-04-11 06:22:02.059977 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-11 06:22:02.059987 | orchestrator | Saturday 11 April 2026 06:21:43 +0000 (0:00:00.764) 1:11:39.866 ********
2026-04-11 06:22:02.059998 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.060008 | orchestrator |
2026-04-11 06:22:02.060019 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-11 06:22:02.060030 | orchestrator | Saturday 11 April 2026 06:21:44 +0000 (0:00:00.808) 1:11:40.674 ********
2026-04-11 06:22:02.060040 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.060051 | orchestrator |
2026-04-11 06:22:02.060061 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-11 06:22:02.060072 | orchestrator | Saturday 11 April 2026 06:21:45 +0000 (0:00:00.782) 1:11:41.456 ********
2026-04-11 06:22:02.060083 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.060093 | orchestrator |
2026-04-11 06:22:02.060104 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-11 06:22:02.060115 | orchestrator | Saturday 11 April 2026 06:21:46 +0000 (0:00:00.840) 1:11:42.297 ********
2026-04-11 06:22:02.060125 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.060136 | orchestrator |
2026-04-11 06:22:02.060153 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-11 06:22:02.060164 | orchestrator | Saturday 11 April 2026 06:21:46 +0000 (0:00:00.781) 1:11:43.079 ********
2026-04-11 06:22:02.060174 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.060185 | orchestrator |
2026-04-11 06:22:02.060196 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-11 06:22:02.060206 | orchestrator | Saturday 11 April 2026 06:21:47 +0000 (0:00:00.813) 1:11:43.892 ********
2026-04-11 06:22:02.060217 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.060228 | orchestrator |
2026-04-11 06:22:02.060238 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-11 06:22:02.060249 | orchestrator | Saturday 11 April 2026 06:21:48 +0000 (0:00:00.826) 1:11:44.718 ********
2026-04-11 06:22:02.060259 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.060270 | orchestrator |
2026-04-11 06:22:02.060281 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-11 06:22:02.060291 | orchestrator | Saturday 11 April 2026 06:21:49 +0000 (0:00:00.778) 1:11:45.497 ********
2026-04-11 06:22:02.060302 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.060313 | orchestrator |
2026-04-11 06:22:02.060323 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-11 06:22:02.060334 | orchestrator | Saturday 11 April 2026 06:21:50 +0000 (0:00:00.823) 1:11:46.321 ********
2026-04-11 06:22:02.060344 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-04-11 06:22:02.060355 | orchestrator |
2026-04-11 06:22:02.060382 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-11 06:22:02.060393 | orchestrator | Saturday 11 April 2026 06:21:54 +0000 (0:00:04.156) 1:11:50.477 ********
2026-04-11 06:22:02.060403 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 06:22:02.060414 | orchestrator |
2026-04-11 06:22:02.060425 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-11 06:22:02.060436 | orchestrator | Saturday 11 April 2026 06:21:55 +0000 (0:00:00.921) 1:11:51.398 ********
2026-04-11 06:22:02.060455 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-11 06:22:02.060469 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-11 06:22:02.060482 | orchestrator |
2026-04-11 06:22:02.060493 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-11 06:22:02.060504 | orchestrator | Saturday 11 April 2026 06:21:59 +0000 (0:00:04.495) 1:11:55.894 ********
2026-04-11 06:22:02.060515 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.060526 | orchestrator |
2026-04-11 06:22:02.060536 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-11 06:22:02.060547 | orchestrator | Saturday 11 April 2026 06:22:00 +0000 (0:00:00.771) 1:11:56.666 ********
2026-04-11 06:22:02.060558 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.060568 | orchestrator |
2026-04-11 06:22:02.060579 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-11 06:22:02.060590 | orchestrator | Saturday 11 April 2026 06:22:01 +0000 (0:00:00.787) 1:11:57.453 ********
2026-04-11 06:22:02.060601 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:22:02.060612 | orchestrator |
2026-04-11 06:22:02.060623 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-11 06:22:02.060641 | orchestrator | Saturday 11 April 2026 06:22:02 +0000 (0:00:00.806) 1:11:58.260 ********
2026-04-11 06:23:08.032883 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:23:08.033000 | orchestrator |
2026-04-11 06:23:08.033017 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-11 06:23:08.033030 | orchestrator | Saturday 11 April 2026 06:22:02 +0000 (0:00:00.825) 1:11:59.086 ********
2026-04-11 06:23:08.033041 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:23:08.033052 | orchestrator |
2026-04-11 06:23:08.033063 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-11 06:23:08.033074 | orchestrator | Saturday 11 April 2026 06:22:03 +0000 (0:00:00.842) 1:11:59.928 ********
2026-04-11 06:23:08.033085 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:23:08.033097 | orchestrator |
2026-04-11 06:23:08.033108 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-11 06:23:08.033119 | orchestrator | Saturday 11 April 2026 06:22:04 +0000 (0:00:00.942) 1:12:00.870 ********
2026-04-11 06:23:08.033130 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-11 06:23:08.033141 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-11 06:23:08.033152 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-11 06:23:08.033163 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:23:08.033174 | orchestrator |
2026-04-11 06:23:08.033184 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-11 06:23:08.033200 | orchestrator | Saturday 11 April 2026 06:22:05 +0000 (0:00:01.106) 1:12:01.976 ********
2026-04-11 06:23:08.033211 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-11 06:23:08.033222 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-11 06:23:08.033249 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-11 06:23:08.033260 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:23:08.033271 | orchestrator |
2026-04-11 06:23:08.033282 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-11 06:23:08.033316 | orchestrator | Saturday 11 April 2026 06:22:06 +0000 (0:00:01.061) 1:12:03.038 ********
2026-04-11 06:23:08.033328 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-11 06:23:08.033338 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-11 06:23:08.033349 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-11 06:23:08.033359 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:23:08.033370 | orchestrator |
2026-04-11 06:23:08.033381 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-11 06:23:08.033391 | orchestrator | Saturday 11 April 2026 06:22:07 +0000 (0:00:01.118) 1:12:04.156 ********
2026-04-11 06:23:08.033402 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:23:08.033413 | orchestrator |
2026-04-11 06:23:08.033427 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-11 06:23:08.033470 | orchestrator | Saturday 11 April 2026 06:22:08 +0000 (0:00:00.845) 1:12:05.002 ********
2026-04-11 06:23:08.033483 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-11 06:23:08.033495 | orchestrator |
2026-04-11 06:23:08.033508 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-11 06:23:08.033521 | orchestrator | Saturday 11 April 2026 06:22:09 +0000 (0:00:01.007) 1:12:06.009 ********
2026-04-11 06:23:08.033533 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:23:08.033546 | orchestrator |
2026-04-11 06:23:08.033559 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-04-11 06:23:08.033572 | orchestrator | Saturday 11 April 2026 06:22:11 +0000 (0:00:02.001) 1:12:08.011 ********
2026-04-11 06:23:08.033585 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5
2026-04-11 06:23:08.033597 | orchestrator |
2026-04-11 06:23:08.033610 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-11 06:23:08.033623 | orchestrator | Saturday 11 April 2026 06:22:12 +0000 (0:00:01.116) 1:12:09.127 ********
2026-04-11 06:23:08.033635 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 06:23:08.033648 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-11 06:23:08.033662 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-11 06:23:08.033674 | orchestrator |
2026-04-11 06:23:08.033687 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-11 06:23:08.033700 | orchestrator | Saturday 11 April 2026 06:22:16 +0000 (0:00:03.226) 1:12:12.354 ********
2026-04-11 06:23:08.033712 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-11 06:23:08.033725 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-11 06:23:08.033738 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:23:08.033751 | orchestrator |
2026-04-11 06:23:08.033764 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-04-11 06:23:08.033777 | orchestrator | Saturday 11 April 2026 06:22:18 +0000 (0:00:01.966) 1:12:14.321 ********
2026-04-11 06:23:08.033787 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:23:08.033798 | orchestrator |
2026-04-11 06:23:08.033809 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-04-11 06:23:08.033819 | orchestrator | Saturday 11 April 2026 06:22:18 +0000 (0:00:00.767) 1:12:15.088 ********
2026-04-11 06:23:08.033830 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5
2026-04-11 06:23:08.033841 | orchestrator |
2026-04-11 06:23:08.033852 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-04-11 06:23:08.033863 | orchestrator | Saturday 11 April 2026 06:22:19 +0000 (0:00:01.105) 1:12:16.194 ********
2026-04-11 06:23:08.033875 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-11 06:23:08.033887 | orchestrator |
2026-04-11 06:23:08.033898 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-04-11 06:23:08.033909 | orchestrator | Saturday 11 April 2026 06:22:21 +0000 (0:00:01.698) 1:12:17.893 ********
2026-04-11 06:23:08.033945 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-11 06:23:08.033958 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-11 06:23:08.033969 | orchestrator |
2026-04-11 06:23:08.033979 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-11 06:23:08.033990 | orchestrator | Saturday 11 April 2026 06:22:26 +0000 (0:00:05.242) 1:12:23.136 ********
2026-04-11 06:23:08.034001 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-11 06:23:08.034012 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-11 06:23:08.034127 | orchestrator | 2026-04-11 06:23:08.034139 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-11 06:23:08.034150 | orchestrator | Saturday 11 April 2026 06:22:30 +0000 (0:00:03.121) 1:12:26.257 ******** 2026-04-11 06:23:08.034160 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-11 06:23:08.034171 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:23:08.034182 | orchestrator | 2026-04-11 06:23:08.034192 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-11 06:23:08.034203 | orchestrator | Saturday 11 April 2026 06:22:31 +0000 (0:00:01.696) 1:12:27.954 ******** 2026-04-11 06:23:08.034214 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-04-11 06:23:08.034225 | orchestrator | 2026-04-11 06:23:08.034235 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-11 06:23:08.034253 | orchestrator | Saturday 11 April 2026 06:22:33 +0000 (0:00:01.327) 1:12:29.281 ******** 2026-04-11 06:23:08.034265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:23:08.034276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:23:08.034287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:23:08.034298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-11 06:23:08.034309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:23:08.034319 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:23:08.034330 | orchestrator | 2026-04-11 06:23:08.034341 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-11 06:23:08.034352 | orchestrator | Saturday 11 April 2026 06:22:34 +0000 (0:00:01.597) 1:12:30.879 ******** 2026-04-11 06:23:08.034363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:23:08.034373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:23:08.034384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:23:08.034395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:23:08.034406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-11 06:23:08.034416 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:23:08.034427 | orchestrator | 2026-04-11 06:23:08.034458 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-11 06:23:08.034469 | orchestrator | Saturday 11 April 2026 06:22:36 +0000 (0:00:01.653) 1:12:32.532 ******** 2026-04-11 06:23:08.034488 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 06:23:08.034500 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 06:23:08.034511 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 06:23:08.034522 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 06:23:08.034533 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-11 06:23:08.034544 | orchestrator | 2026-04-11 06:23:08.034555 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-11 06:23:08.034566 | orchestrator | Saturday 11 April 2026 06:23:07 +0000 (0:00:30.921) 1:13:03.453 ******** 2026-04-11 06:23:08.034576 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:23:08.034587 | orchestrator | 2026-04-11 06:23:08.034598 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-11 06:23:08.034616 | orchestrator | Saturday 11 April 2026 06:23:08 +0000 (0:00:00.779) 1:13:04.233 ******** 2026-04-11 06:23:59.981900 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:23:59.982078 | orchestrator | 2026-04-11 06:23:59.982128 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-11 06:23:59.982143 | orchestrator | Saturday 11 April 2026 06:23:08 +0000 (0:00:00.780) 1:13:05.014 ******** 2026-04-11 06:23:59.982155 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-04-11 06:23:59.982166 | orchestrator | 2026-04-11 06:23:59.982177 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-04-11 06:23:59.982188 | orchestrator | Saturday 11 April 2026 06:23:09 +0000 (0:00:01.143) 1:13:06.158 ******** 2026-04-11 06:23:59.982200 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-04-11 06:23:59.982210 | orchestrator | 2026-04-11 06:23:59.982221 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-11 06:23:59.982232 | orchestrator | Saturday 11 April 2026 06:23:11 +0000 (0:00:01.138) 1:13:07.296 ******** 2026-04-11 06:23:59.982243 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:23:59.982255 | orchestrator | 2026-04-11 06:23:59.982266 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-11 06:23:59.982277 | orchestrator | Saturday 11 April 2026 06:23:13 +0000 (0:00:02.081) 1:13:09.378 ******** 2026-04-11 06:23:59.982288 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:23:59.982299 | orchestrator | 2026-04-11 06:23:59.982309 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-11 06:23:59.982320 | orchestrator | Saturday 11 April 2026 06:23:15 +0000 (0:00:01.971) 1:13:11.350 ******** 2026-04-11 06:23:59.982345 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:23:59.982356 | orchestrator | 2026-04-11 06:23:59.982367 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-11 06:23:59.982378 | orchestrator | Saturday 11 April 2026 06:23:17 +0000 (0:00:02.284) 1:13:13.634 ******** 2026-04-11 06:23:59.982390 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-11 06:23:59.982403 | orchestrator | 2026-04-11 06:23:59.982413 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-04-11 06:23:59.982424 | 
orchestrator | skipping: no hosts matched 2026-04-11 06:23:59.982438 | orchestrator | 2026-04-11 06:23:59.982450 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-04-11 06:23:59.982462 | orchestrator | skipping: no hosts matched 2026-04-11 06:23:59.982519 | orchestrator | 2026-04-11 06:23:59.982533 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-04-11 06:23:59.982546 | orchestrator | skipping: no hosts matched 2026-04-11 06:23:59.982558 | orchestrator | 2026-04-11 06:23:59.982571 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-04-11 06:23:59.982583 | orchestrator | 2026-04-11 06:23:59.982596 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-04-11 06:23:59.982608 | orchestrator | Saturday 11 April 2026 06:23:21 +0000 (0:00:04.230) 1:13:17.865 ******** 2026-04-11 06:23:59.982620 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:23:59.982633 | orchestrator | changed: [testbed-node-1] 2026-04-11 06:23:59.982646 | orchestrator | changed: [testbed-node-2] 2026-04-11 06:23:59.982659 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:23:59.982671 | orchestrator | changed: [testbed-node-4] 2026-04-11 06:23:59.982684 | orchestrator | changed: [testbed-node-5] 2026-04-11 06:23:59.982696 | orchestrator | 2026-04-11 06:23:59.982708 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-04-11 06:23:59.982721 | orchestrator | Saturday 11 April 2026 06:23:24 +0000 (0:00:02.809) 1:13:20.674 ******** 2026-04-11 06:23:59.982733 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:23:59.982746 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:23:59.982758 | orchestrator | changed: [testbed-node-1] 2026-04-11 06:23:59.982770 | orchestrator | changed: [testbed-node-4] 2026-04-11 06:23:59.982782 | 
orchestrator | changed: [testbed-node-2] 2026-04-11 06:23:59.982794 | orchestrator | changed: [testbed-node-5] 2026-04-11 06:23:59.982805 | orchestrator | 2026-04-11 06:23:59.982816 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-11 06:23:59.982826 | orchestrator | Saturday 11 April 2026 06:23:27 +0000 (0:00:03.399) 1:13:24.073 ******** 2026-04-11 06:23:59.982837 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:23:59.982848 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:23:59.982858 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:23:59.982869 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:23:59.982879 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:23:59.982890 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:23:59.982901 | orchestrator | 2026-04-11 06:23:59.982911 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 06:23:59.982922 | orchestrator | Saturday 11 April 2026 06:23:30 +0000 (0:00:02.224) 1:13:26.298 ******** 2026-04-11 06:23:59.982933 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:23:59.982943 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:23:59.982954 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:23:59.982964 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:23:59.982975 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:23:59.982985 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:23:59.982996 | orchestrator | 2026-04-11 06:23:59.983006 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-11 06:23:59.983017 | orchestrator | Saturday 11 April 2026 06:23:32 +0000 (0:00:02.075) 1:13:28.374 ******** 2026-04-11 06:23:59.983028 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 06:23:59.983041 | 
orchestrator | 2026-04-11 06:23:59.983051 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-11 06:23:59.983062 | orchestrator | Saturday 11 April 2026 06:23:34 +0000 (0:00:02.359) 1:13:30.734 ******** 2026-04-11 06:23:59.983073 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 06:23:59.983084 | orchestrator | 2026-04-11 06:23:59.983112 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-11 06:23:59.983124 | orchestrator | Saturday 11 April 2026 06:23:36 +0000 (0:00:02.357) 1:13:33.091 ******** 2026-04-11 06:23:59.983134 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:23:59.983153 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:23:59.983164 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:23:59.983175 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:23:59.983186 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:23:59.983196 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:23:59.983207 | orchestrator | 2026-04-11 06:23:59.983218 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-11 06:23:59.983229 | orchestrator | Saturday 11 April 2026 06:23:39 +0000 (0:00:02.625) 1:13:35.717 ******** 2026-04-11 06:23:59.983239 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:23:59.983250 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:23:59.983260 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:23:59.983271 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:23:59.983282 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:23:59.983293 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:23:59.983303 | orchestrator | 2026-04-11 06:23:59.983314 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-04-11 06:23:59.983325 | orchestrator | Saturday 11 April 2026 06:23:41 +0000 (0:00:02.060) 1:13:37.777 ******** 2026-04-11 06:23:59.983336 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:23:59.983347 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:23:59.983357 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:23:59.983368 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:23:59.983378 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:23:59.983389 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:23:59.983400 | orchestrator | 2026-04-11 06:23:59.983416 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-11 06:23:59.983427 | orchestrator | Saturday 11 April 2026 06:23:44 +0000 (0:00:02.564) 1:13:40.342 ******** 2026-04-11 06:23:59.983438 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:23:59.983448 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:23:59.983459 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:23:59.983470 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:23:59.983481 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:23:59.983513 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:23:59.983524 | orchestrator | 2026-04-11 06:23:59.983534 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-11 06:23:59.983545 | orchestrator | Saturday 11 April 2026 06:23:46 +0000 (0:00:02.375) 1:13:42.717 ******** 2026-04-11 06:23:59.983556 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:23:59.983567 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:23:59.983577 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:23:59.983588 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:23:59.983599 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:23:59.983610 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:23:59.983620 | orchestrator | 
2026-04-11 06:23:59.983631 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-11 06:23:59.983642 | orchestrator | Saturday 11 April 2026 06:23:48 +0000 (0:00:02.318) 1:13:45.036 ******** 2026-04-11 06:23:59.983652 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:23:59.983663 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:23:59.983674 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:23:59.983684 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:23:59.983695 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:23:59.983706 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:23:59.983716 | orchestrator | 2026-04-11 06:23:59.983727 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-11 06:23:59.983738 | orchestrator | Saturday 11 April 2026 06:23:50 +0000 (0:00:01.768) 1:13:46.804 ******** 2026-04-11 06:23:59.983749 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:23:59.983759 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:23:59.983770 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:23:59.983780 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:23:59.983791 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:23:59.983810 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:23:59.983821 | orchestrator | 2026-04-11 06:23:59.983832 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-11 06:23:59.983842 | orchestrator | Saturday 11 April 2026 06:23:52 +0000 (0:00:01.717) 1:13:48.522 ******** 2026-04-11 06:23:59.983854 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:23:59.983864 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:23:59.983875 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:23:59.983886 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:23:59.983896 | orchestrator | ok: [testbed-node-4] 
2026-04-11 06:23:59.983907 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:23:59.983917 | orchestrator | 2026-04-11 06:23:59.983928 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-11 06:23:59.983939 | orchestrator | Saturday 11 April 2026 06:23:54 +0000 (0:00:02.514) 1:13:51.036 ******** 2026-04-11 06:23:59.983950 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:23:59.983960 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:23:59.983971 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:23:59.983981 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:23:59.983992 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:23:59.984002 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:23:59.984013 | orchestrator | 2026-04-11 06:23:59.984024 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-11 06:23:59.984035 | orchestrator | Saturday 11 April 2026 06:23:56 +0000 (0:00:02.157) 1:13:53.194 ******** 2026-04-11 06:23:59.984045 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:23:59.984056 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:23:59.984067 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:23:59.984077 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:23:59.984088 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:23:59.984099 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:23:59.984117 | orchestrator | 2026-04-11 06:23:59.984136 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-11 06:23:59.984154 | orchestrator | Saturday 11 April 2026 06:23:59 +0000 (0:00:02.145) 1:13:55.340 ******** 2026-04-11 06:23:59.984172 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:23:59.984190 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:23:59.984207 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:23:59.984225 | orchestrator | skipping: 
[testbed-node-3] 2026-04-11 06:23:59.984241 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:23:59.984259 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:23:59.984276 | orchestrator | 2026-04-11 06:23:59.984305 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-11 06:24:57.468233 | orchestrator | Saturday 11 April 2026 06:24:01 +0000 (0:00:01.913) 1:13:57.253 ******** 2026-04-11 06:24:57.468354 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:24:57.468372 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:24:57.468384 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:24:57.468395 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:24:57.468407 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:24:57.468418 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:24:57.468429 | orchestrator | 2026-04-11 06:24:57.468441 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-11 06:24:57.468452 | orchestrator | Saturday 11 April 2026 06:24:03 +0000 (0:00:02.116) 1:13:59.370 ******** 2026-04-11 06:24:57.468463 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:24:57.468474 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:24:57.468485 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:24:57.468496 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:24:57.468507 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:24:57.468518 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:24:57.468529 | orchestrator | 2026-04-11 06:24:57.468539 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-11 06:24:57.468613 | orchestrator | Saturday 11 April 2026 06:24:04 +0000 (0:00:01.720) 1:14:01.091 ******** 2026-04-11 06:24:57.468663 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:24:57.468682 | orchestrator | skipping: [testbed-node-1] 2026-04-11 
06:24:57.468701 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:24:57.468719 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:24:57.468738 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:24:57.468758 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:24:57.468778 | orchestrator | 2026-04-11 06:24:57.468812 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-11 06:24:57.468826 | orchestrator | Saturday 11 April 2026 06:24:07 +0000 (0:00:02.156) 1:14:03.248 ******** 2026-04-11 06:24:57.468839 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:24:57.468851 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:24:57.468863 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:24:57.468882 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:24:57.468902 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:24:57.468921 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:24:57.468940 | orchestrator | 2026-04-11 06:24:57.468955 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-11 06:24:57.468969 | orchestrator | Saturday 11 April 2026 06:24:09 +0000 (0:00:02.018) 1:14:05.266 ******** 2026-04-11 06:24:57.468981 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:24:57.468994 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:24:57.469006 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:24:57.469018 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:24:57.469030 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:24:57.469042 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:24:57.469055 | orchestrator | 2026-04-11 06:24:57.469067 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-11 06:24:57.469079 | orchestrator | Saturday 11 April 2026 06:24:11 +0000 (0:00:02.039) 1:14:07.306 ******** 2026-04-11 06:24:57.469091 | 
orchestrator | ok: [testbed-node-0] 2026-04-11 06:24:57.469104 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:24:57.469116 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:24:57.469127 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:24:57.469137 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:24:57.469148 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:24:57.469159 | orchestrator | 2026-04-11 06:24:57.469170 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-11 06:24:57.469180 | orchestrator | Saturday 11 April 2026 06:24:12 +0000 (0:00:01.800) 1:14:09.106 ******** 2026-04-11 06:24:57.469191 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:24:57.469202 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:24:57.469212 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:24:57.469223 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:24:57.469233 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:24:57.469244 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:24:57.469254 | orchestrator | 2026-04-11 06:24:57.469265 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-11 06:24:57.469275 | orchestrator | Saturday 11 April 2026 06:24:14 +0000 (0:00:01.812) 1:14:10.919 ******** 2026-04-11 06:24:57.469286 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:24:57.469296 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:24:57.469307 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:24:57.469318 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:24:57.469328 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:24:57.469338 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:24:57.469363 | orchestrator | 2026-04-11 06:24:57.469374 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-11 06:24:57.469384 | orchestrator | Saturday 11 April 2026 06:24:17 +0000 (0:00:02.316) 
1:14:13.235 ******** 2026-04-11 06:24:57.469395 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:24:57.469406 | orchestrator | 2026-04-11 06:24:57.469416 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-11 06:24:57.469427 | orchestrator | Saturday 11 April 2026 06:24:20 +0000 (0:00:03.056) 1:14:16.292 ******** 2026-04-11 06:24:57.469447 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:24:57.469458 | orchestrator | 2026-04-11 06:24:57.469469 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-11 06:24:57.469480 | orchestrator | Saturday 11 April 2026 06:24:23 +0000 (0:00:03.059) 1:14:19.351 ******** 2026-04-11 06:24:57.469490 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:24:57.469501 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:24:57.469511 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:24:57.469522 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:24:57.469533 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:24:57.469543 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:24:57.469581 | orchestrator | 2026-04-11 06:24:57.469593 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-11 06:24:57.469604 | orchestrator | Saturday 11 April 2026 06:24:26 +0000 (0:00:02.953) 1:14:22.305 ******** 2026-04-11 06:24:57.469615 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:24:57.469625 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:24:57.469636 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:24:57.469646 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:24:57.469657 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:24:57.469667 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:24:57.469678 | orchestrator | 2026-04-11 06:24:57.469688 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-04-11 06:24:57.469719 | orchestrator 
| Saturday 11 April 2026 06:24:28 +0000 (0:00:02.160) 1:14:24.466 ******** 2026-04-11 06:24:57.469732 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 06:24:57.469744 | orchestrator | 2026-04-11 06:24:57.469755 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-11 06:24:57.469766 | orchestrator | Saturday 11 April 2026 06:24:30 +0000 (0:00:02.613) 1:14:27.080 ******** 2026-04-11 06:24:57.469776 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:24:57.469787 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:24:57.469797 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:24:57.469808 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:24:57.469818 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:24:57.469829 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:24:57.469839 | orchestrator | 2026-04-11 06:24:57.469850 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-11 06:24:57.469861 | orchestrator | Saturday 11 April 2026 06:24:33 +0000 (0:00:02.976) 1:14:30.056 ******** 2026-04-11 06:24:57.469871 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:24:57.469882 | orchestrator | changed: [testbed-node-4] 2026-04-11 06:24:57.469893 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:24:57.469903 | orchestrator | changed: [testbed-node-5] 2026-04-11 06:24:57.469914 | orchestrator | changed: [testbed-node-2] 2026-04-11 06:24:57.469925 | orchestrator | changed: [testbed-node-1] 2026-04-11 06:24:57.469935 | orchestrator | 2026-04-11 06:24:57.469946 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-04-11 06:24:57.469956 | orchestrator | 2026-04-11 06:24:57.469973 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-04-11 06:24:57.469984 | orchestrator | Saturday 11 April 2026 06:24:39 +0000 (0:00:05.462) 1:14:35.519 ******** 2026-04-11 06:24:57.469995 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:24:57.470005 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:24:57.470078 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:24:57.470090 | orchestrator | 2026-04-11 06:24:57.470101 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 06:24:57.470112 | orchestrator | Saturday 11 April 2026 06:24:41 +0000 (0:00:01.699) 1:14:37.218 ******** 2026-04-11 06:24:57.470122 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:24:57.470133 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:24:57.470143 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:24:57.470154 | orchestrator | 2026-04-11 06:24:57.470535 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-04-11 06:24:57.470572 | orchestrator | Saturday 11 April 2026 06:24:42 +0000 (0:00:01.623) 1:14:38.842 ******** 2026-04-11 06:24:57.470583 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:24:57.470594 | orchestrator | 2026-04-11 06:24:57.470605 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-04-11 06:24:57.470616 | orchestrator | Saturday 11 April 2026 06:24:45 +0000 (0:00:02.477) 1:14:41.320 ******** 2026-04-11 06:24:57.470626 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:24:57.470637 | orchestrator | 2026-04-11 06:24:57.470648 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-04-11 06:24:57.470658 | orchestrator | 2026-04-11 06:24:57.470669 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-04-11 06:24:57.470680 | orchestrator | Saturday 11 April 2026 06:24:47 +0000 (0:00:01.958) 1:14:43.278 ******** 2026-04-11 
06:24:57.470690 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:24:57.470701 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:24:57.470712 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:24:57.470722 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:24:57.470733 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:24:57.470744 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:24:57.470754 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:24:57.470765 | orchestrator | 2026-04-11 06:24:57.470775 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-11 06:24:57.470786 | orchestrator | Saturday 11 April 2026 06:24:49 +0000 (0:00:02.466) 1:14:45.744 ******** 2026-04-11 06:24:57.470797 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:24:57.470807 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:24:57.470818 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:24:57.470828 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:24:57.470839 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:24:57.470849 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:24:57.470860 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:24:57.470871 | orchestrator | 2026-04-11 06:24:57.470881 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-04-11 06:24:57.470892 | orchestrator | Saturday 11 April 2026 06:24:52 +0000 (0:00:02.474) 1:14:48.219 ******** 2026-04-11 06:24:57.470902 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:24:57.470913 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:24:57.470923 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:24:57.470934 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:24:57.470945 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:24:57.470956 | orchestrator | skipping: [testbed-node-5] 2026-04-11 
06:24:57.470966 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:24:57.470977 | orchestrator | 2026-04-11 06:24:57.470987 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-04-11 06:24:57.470998 | orchestrator | Saturday 11 April 2026 06:24:54 +0000 (0:00:02.462) 1:14:50.682 ******** 2026-04-11 06:24:57.471009 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:24:57.471020 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:24:57.471030 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:24:57.471041 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:24:57.471051 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:24:57.471062 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:24:57.471072 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:24:57.471083 | orchestrator | 2026-04-11 06:24:57.471093 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-04-11 06:24:57.471104 | orchestrator | Saturday 11 April 2026 06:24:56 +0000 (0:00:02.522) 1:14:53.205 ******** 2026-04-11 06:24:57.471115 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:24:57.471125 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:24:57.471136 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:24:57.471158 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:25:44.879150 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:25:44.879293 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:25:44.879311 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.879323 | orchestrator | 2026-04-11 06:25:44.879336 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-04-11 06:25:44.879348 | orchestrator | 2026-04-11 06:25:44.879360 | orchestrator | TASK [Stop monitoring services] ************************************************ 2026-04-11 06:25:44.879371 | 
orchestrator | Saturday 11 April 2026 06:25:00 +0000 (0:00:03.020) 1:14:56.225 ******** 2026-04-11 06:25:44.879383 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-04-11 06:25:44.879395 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-04-11 06:25:44.879406 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-04-11 06:25:44.879417 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.879428 | orchestrator | 2026-04-11 06:25:44.879439 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-04-11 06:25:44.879450 | orchestrator | Saturday 11 April 2026 06:25:01 +0000 (0:00:01.268) 1:14:57.493 ******** 2026-04-11 06:25:44.879460 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.879471 | orchestrator | 2026-04-11 06:25:44.879482 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-04-11 06:25:44.879493 | orchestrator | Saturday 11 April 2026 06:25:02 +0000 (0:00:01.114) 1:14:58.608 ******** 2026-04-11 06:25:44.879503 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.879514 | orchestrator | 2026-04-11 06:25:44.879540 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-04-11 06:25:44.879552 | orchestrator | Saturday 11 April 2026 06:25:03 +0000 (0:00:01.112) 1:14:59.721 ******** 2026-04-11 06:25:44.879562 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.879573 | orchestrator | 2026-04-11 06:25:44.879584 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-04-11 06:25:44.879624 | orchestrator | Saturday 11 April 2026 06:25:04 +0000 (0:00:01.131) 1:15:00.852 ******** 2026-04-11 06:25:44.879636 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.879647 | orchestrator | 2026-04-11 06:25:44.879658 | orchestrator | TASK [ceph-prometheus : Create 
prometheus directories] ************************* 2026-04-11 06:25:44.879668 | orchestrator | Saturday 11 April 2026 06:25:05 +0000 (0:00:01.136) 1:15:01.989 ******** 2026-04-11 06:25:44.879680 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-04-11 06:25:44.879694 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-04-11 06:25:44.879707 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.879719 | orchestrator | 2026-04-11 06:25:44.879733 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-04-11 06:25:44.879746 | orchestrator | Saturday 11 April 2026 06:25:06 +0000 (0:00:01.112) 1:15:03.102 ******** 2026-04-11 06:25:44.879758 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.879771 | orchestrator | 2026-04-11 06:25:44.879784 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-04-11 06:25:44.879797 | orchestrator | Saturday 11 April 2026 06:25:07 +0000 (0:00:01.107) 1:15:04.209 ******** 2026-04-11 06:25:44.879810 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.879823 | orchestrator | 2026-04-11 06:25:44.879835 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-04-11 06:25:44.879847 | orchestrator | Saturday 11 April 2026 06:25:09 +0000 (0:00:01.126) 1:15:05.336 ******** 2026-04-11 06:25:44.879860 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.879873 | orchestrator | 2026-04-11 06:25:44.879885 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-04-11 06:25:44.879898 | orchestrator | Saturday 11 April 2026 06:25:10 +0000 (0:00:01.105) 1:15:06.442 ******** 2026-04-11 06:25:44.879910 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-04-11 06:25:44.879923 | orchestrator | skipping: [testbed-manager] => 
(item=/var/lib/alertmanager)  2026-04-11 06:25:44.879958 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.879970 | orchestrator | 2026-04-11 06:25:44.879982 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-04-11 06:25:44.879994 | orchestrator | Saturday 11 April 2026 06:25:11 +0000 (0:00:01.146) 1:15:07.589 ******** 2026-04-11 06:25:44.880006 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.880019 | orchestrator | 2026-04-11 06:25:44.880032 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-04-11 06:25:44.880043 | orchestrator | Saturday 11 April 2026 06:25:12 +0000 (0:00:01.131) 1:15:08.720 ******** 2026-04-11 06:25:44.880054 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.880064 | orchestrator | 2026-04-11 06:25:44.880075 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-04-11 06:25:44.880085 | orchestrator | Saturday 11 April 2026 06:25:13 +0000 (0:00:01.122) 1:15:09.842 ******** 2026-04-11 06:25:44.880096 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.880107 | orchestrator | 2026-04-11 06:25:44.880118 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-04-11 06:25:44.880128 | orchestrator | Saturday 11 April 2026 06:25:14 +0000 (0:00:01.136) 1:15:10.978 ******** 2026-04-11 06:25:44.880139 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:25:44.880150 | orchestrator | 2026-04-11 06:25:44.880160 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-04-11 06:25:44.880171 | orchestrator | 2026-04-11 06:25:44.880182 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-11 06:25:44.880192 | orchestrator | Saturday 11 April 2026 06:25:16 +0000 (0:00:01.626) 1:15:12.605 ******** 2026-04-11 
06:25:44.880203 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:25:44.880214 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:25:44.880225 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:25:44.880235 | orchestrator | 2026-04-11 06:25:44.880246 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-04-11 06:25:44.880257 | orchestrator | Saturday 11 April 2026 06:25:18 +0000 (0:00:01.680) 1:15:14.286 ******** 2026-04-11 06:25:44.880267 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:25:44.880278 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:25:44.880307 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:25:44.880318 | orchestrator | 2026-04-11 06:25:44.880329 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-04-11 06:25:44.880340 | orchestrator | Saturday 11 April 2026 06:25:19 +0000 (0:00:01.394) 1:15:15.680 ******** 2026-04-11 06:25:44.880350 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:25:44.880361 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:25:44.880372 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:25:44.880383 | orchestrator | 2026-04-11 06:25:44.880393 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-04-11 06:25:44.880404 | orchestrator | Saturday 11 April 2026 06:25:20 +0000 (0:00:01.378) 1:15:17.059 ******** 2026-04-11 06:25:44.880415 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:25:44.880426 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:25:44.880436 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:25:44.880447 | orchestrator | 2026-04-11 06:25:44.880458 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-04-11 06:25:44.880469 | orchestrator | Saturday 11 April 2026 06:25:22 +0000 (0:00:01.428) 1:15:18.487 ******** 2026-04-11 
06:25:44.880479 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:25:44.880490 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:25:44.880501 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:25:44.880512 | orchestrator | 2026-04-11 06:25:44.880522 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-04-11 06:25:44.880533 | orchestrator | Saturday 11 April 2026 06:25:23 +0000 (0:00:01.418) 1:15:19.906 ******** 2026-04-11 06:25:44.880549 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:25:44.880568 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:25:44.880580 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:25:44.880590 | orchestrator | 2026-04-11 06:25:44.880620 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-04-11 06:25:44.880631 | orchestrator | Saturday 11 April 2026 06:25:25 +0000 (0:00:01.355) 1:15:21.261 ******** 2026-04-11 06:25:44.880642 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:25:44.880653 | orchestrator | 2026-04-11 06:25:44.880663 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-04-11 06:25:44.880674 | orchestrator | 2026-04-11 06:25:44.880685 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-11 06:25:44.880696 | orchestrator | Saturday 11 April 2026 06:25:26 +0000 (0:00:01.859) 1:15:23.121 ******** 2026-04-11 06:25:44.880707 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:25:44.880718 | orchestrator | 2026-04-11 06:25:44.880729 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-11 06:25:44.880740 | orchestrator | Saturday 11 April 2026 06:25:28 +0000 (0:00:01.481) 1:15:24.602 ******** 2026-04-11 06:25:44.880750 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:25:44.880761 | orchestrator | 2026-04-11 06:25:44.880772 
| orchestrator | TASK [Set_fact ceph_cmd] ******************************************************* 2026-04-11 06:25:44.880783 | orchestrator | Saturday 11 April 2026 06:25:29 +0000 (0:00:01.176) 1:15:25.779 ******** 2026-04-11 06:25:44.880793 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:25:44.880804 | orchestrator | 2026-04-11 06:25:44.880815 | orchestrator | TASK [Backup the crushmap] ***************************************************** 2026-04-11 06:25:44.880825 | orchestrator | Saturday 11 April 2026 06:25:30 +0000 (0:00:01.143) 1:15:26.923 ******** 2026-04-11 06:25:44.880836 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:25:44.880847 | orchestrator | 2026-04-11 06:25:44.880857 | orchestrator | TASK [Switch crush buckets to straw2] ****************************************** 2026-04-11 06:25:44.880868 | orchestrator | Saturday 11 April 2026 06:25:33 +0000 (0:00:02.902) 1:15:29.825 ******** 2026-04-11 06:25:44.880879 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:25:44.880890 | orchestrator | 2026-04-11 06:25:44.880900 | orchestrator | TASK [Remove crushmap backup] ************************************************** 2026-04-11 06:25:44.880912 | orchestrator | Saturday 11 April 2026 06:25:37 +0000 (0:00:03.881) 1:15:33.707 ******** 2026-04-11 06:25:44.880929 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:25:44.880948 | orchestrator | 2026-04-11 06:25:44.880966 | orchestrator | PLAY [Show ceph status] ******************************************************** 2026-04-11 06:25:44.880985 | orchestrator | 2026-04-11 06:25:44.881002 | orchestrator | TASK [Set_fact container_exec_cmd_status] ************************************** 2026-04-11 06:25:44.881014 | orchestrator | Saturday 11 April 2026 06:25:39 +0000 (0:00:01.874) 1:15:35.582 ******** 2026-04-11 06:25:44.881025 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:25:44.881036 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:25:44.881046 | orchestrator | ok: [testbed-node-2] 2026-04-11 
06:25:44.881057 | orchestrator | 2026-04-11 06:25:44.881068 | orchestrator | TASK [Show ceph status] ******************************************************** 2026-04-11 06:25:44.881079 | orchestrator | Saturday 11 April 2026 06:25:41 +0000 (0:00:01.971) 1:15:37.553 ******** 2026-04-11 06:25:44.881090 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:25:44.881101 | orchestrator | 2026-04-11 06:25:44.881111 | orchestrator | TASK [Show all daemons version] ************************************************ 2026-04-11 06:25:44.881122 | orchestrator | Saturday 11 April 2026 06:25:43 +0000 (0:00:02.317) 1:15:39.871 ******** 2026-04-11 06:25:44.881133 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:25:44.881144 | orchestrator | 2026-04-11 06:25:44.881155 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 06:25:44.881166 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-11 06:25:44.881179 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0 2026-04-11 06:25:44.881199 | orchestrator | testbed-node-0 : ok=248  changed=20  unreachable=0 failed=0 skipped=376  rescued=0 ignored=0 2026-04-11 06:25:44.881210 | orchestrator | testbed-node-1 : ok=191  changed=15  unreachable=0 failed=0 skipped=350  rescued=0 ignored=0 2026-04-11 06:25:44.881228 | orchestrator | testbed-node-2 : ok=196  changed=16  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0 2026-04-11 06:25:48.052037 | orchestrator | testbed-node-3 : ok=317  changed=21  unreachable=0 failed=0 skipped=362  rescued=0 ignored=0 2026-04-11 06:25:48.052144 | orchestrator | testbed-node-4 : ok=307  changed=18  unreachable=0 failed=0 skipped=359  rescued=0 ignored=0 2026-04-11 06:25:48.052161 | orchestrator | testbed-node-5 : ok=303  changed=18  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0 2026-04-11 06:25:48.052173 | orchestrator | 2026-04-11 
06:25:48.052185 | orchestrator | 2026-04-11 06:25:48.052206 | orchestrator | 2026-04-11 06:25:48.052217 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 06:25:48.052231 | orchestrator | Saturday 11 April 2026 06:25:47 +0000 (0:00:03.735) 1:15:43.607 ******** 2026-04-11 06:25:48.052251 | orchestrator | =============================================================================== 2026-04-11 06:25:48.052262 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 75.15s 2026-04-11 06:25:48.052273 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 73.20s 2026-04-11 06:25:48.052303 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 32.54s 2026-04-11 06:25:48.052315 | orchestrator | Gather and delegate facts ---------------------------------------------- 32.11s 2026-04-11 06:25:48.052325 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.02s 2026-04-11 06:25:48.052336 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.92s 2026-04-11 06:25:48.052347 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.70s 2026-04-11 06:25:48.052357 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 29.16s 2026-04-11 06:25:48.052368 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.07s 2026-04-11 06:25:48.052378 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 22.98s 2026-04-11 06:25:48.052389 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.91s 2026-04-11 06:25:48.052400 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.62s 2026-04-11 06:25:48.052410 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.54s 2026-04-11 06:25:48.052421 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 13.73s 2026-04-11 06:25:48.052432 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.85s 2026-04-11 06:25:48.052443 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.80s 2026-04-11 06:25:48.052453 | orchestrator | Stop ceph osd ---------------------------------------------------------- 11.87s 2026-04-11 06:25:48.052464 | orchestrator | Stop ceph mon ---------------------------------------------------------- 11.46s 2026-04-11 06:25:48.052474 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 10.95s 2026-04-11 06:25:48.052485 | orchestrator | Stop standby ceph mds -------------------------------------------------- 10.81s 2026-04-11 06:25:48.272018 | orchestrator | + osism apply cephclient 2026-04-11 06:25:49.631385 | orchestrator | 2026-04-11 06:25:49 | INFO  | Prepare task for execution of cephclient. 2026-04-11 06:25:49.709024 | orchestrator | 2026-04-11 06:25:49 | INFO  | Task 0df4fb8a-487d-4a9b-9615-5472b37b7e94 (cephclient) was prepared for execution. 2026-04-11 06:25:49.710532 | orchestrator | 2026-04-11 06:25:49 | INFO  | It takes a moment until task 0df4fb8a-487d-4a9b-9615-5472b37b7e94 (cephclient) has been started and output is visible here. 
2026-04-11 06:26:08.260706 | orchestrator | 2026-04-11 06:26:08.260850 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-11 06:26:08.260867 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-11 06:26:08.260881 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-11 06:26:08.260905 | orchestrator | 2026-04-11 06:26:08.260916 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-11 06:26:08.260928 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-11 06:26:08.260939 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-11 06:26:08.260961 | orchestrator | Saturday 11 April 2026 06:25:55 +0000 (0:00:01.876) 0:00:01.876 ******** 2026-04-11 06:26:08.260972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-11 06:26:08.260985 | orchestrator | 2026-04-11 06:26:08.260996 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-11 06:26:08.261007 | orchestrator | Saturday 11 April 2026 06:25:56 +0000 (0:00:00.741) 0:00:02.618 ******** 2026-04-11 06:26:08.261018 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-11 06:26:08.261029 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-11 06:26:08.261041 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-11 06:26:08.261052 | orchestrator | 2026-04-11 06:26:08.261063 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-11 06:26:08.261073 | orchestrator | Saturday 11 April 2026 06:25:57 +0000 (0:00:01.668) 0:00:04.286 ******** 2026-04-11 06:26:08.261084 | orchestrator | ok: [testbed-manager] => 
(item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-11 06:26:08.261095 | orchestrator | 2026-04-11 06:26:08.261106 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-11 06:26:08.261116 | orchestrator | Saturday 11 April 2026 06:25:58 +0000 (0:00:01.114) 0:00:05.401 ******** 2026-04-11 06:26:08.261127 | orchestrator | ok: [testbed-manager] 2026-04-11 06:26:08.261138 | orchestrator | 2026-04-11 06:26:08.261149 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-11 06:26:08.261160 | orchestrator | Saturday 11 April 2026 06:25:59 +0000 (0:00:00.949) 0:00:06.350 ******** 2026-04-11 06:26:08.261171 | orchestrator | ok: [testbed-manager] 2026-04-11 06:26:08.261184 | orchestrator | 2026-04-11 06:26:08.261198 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-11 06:26:08.261210 | orchestrator | Saturday 11 April 2026 06:26:00 +0000 (0:00:00.925) 0:00:07.276 ******** 2026-04-11 06:26:08.261222 | orchestrator | ok: [testbed-manager] 2026-04-11 06:26:08.261235 | orchestrator | 2026-04-11 06:26:08.261248 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-11 06:26:08.261261 | orchestrator | Saturday 11 April 2026 06:26:02 +0000 (0:00:01.400) 0:00:08.677 ******** 2026-04-11 06:26:08.261295 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-11 06:26:08.261309 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool) 2026-04-11 06:26:08.261323 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-11 06:26:08.261336 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-11 06:26:08.261349 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-11 06:26:08.261361 | orchestrator | 2026-04-11 06:26:08.261374 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] 
****************** 2026-04-11 06:26:08.261417 | orchestrator | Saturday 11 April 2026 06:26:06 +0000 (0:00:04.061) 0:00:12.739 ******** 2026-04-11 06:26:08.261430 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-11 06:26:08.261443 | orchestrator | 2026-04-11 06:26:08.261457 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-11 06:26:08.261470 | orchestrator | Saturday 11 April 2026 06:26:06 +0000 (0:00:00.502) 0:00:13.241 ******** 2026-04-11 06:26:08.261482 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:26:08.261494 | orchestrator | 2026-04-11 06:26:08.261507 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-11 06:26:08.261520 | orchestrator | Saturday 11 April 2026 06:26:06 +0000 (0:00:00.167) 0:00:13.409 ******** 2026-04-11 06:26:08.261531 | orchestrator | skipping: [testbed-manager] 2026-04-11 06:26:08.261542 | orchestrator | 2026-04-11 06:26:08.261553 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 06:26:08.261563 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 06:26:08.261575 | orchestrator | 2026-04-11 06:26:08.261586 | orchestrator | 2026-04-11 06:26:08.261597 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 06:26:08.261608 | orchestrator | Saturday 11 April 2026 06:26:07 +0000 (0:00:00.900) 0:00:14.309 ******** 2026-04-11 06:26:08.261637 | orchestrator | =============================================================================== 2026-04-11 06:26:08.261648 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.06s 2026-04-11 06:26:08.261658 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.67s 2026-04-11 06:26:08.261669 | orchestrator | 
osism.services.cephclient : Manage cephclient service ------------------- 1.40s 2026-04-11 06:26:08.261680 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.11s 2026-04-11 06:26:08.261690 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s 2026-04-11 06:26:08.261701 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.93s 2026-04-11 06:26:08.261731 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.90s 2026-04-11 06:26:08.261743 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.74s 2026-04-11 06:26:08.261754 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s 2026-04-11 06:26:08.261765 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.17s 2026-04-11 06:26:08.429676 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-11 06:26:08.429802 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh 2026-04-11 06:26:08.437264 | orchestrator | + set -e 2026-04-11 06:26:08.438491 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-11 06:26:08.438516 | orchestrator | ++ export INTERACTIVE=false 2026-04-11 06:26:08.438528 | orchestrator | ++ INTERACTIVE=false 2026-04-11 06:26:08.438539 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-11 06:26:08.438550 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-11 06:26:08.438561 | orchestrator | + source /opt/manager-vars.sh 2026-04-11 06:26:08.438572 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-11 06:26:08.438583 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-11 06:26:08.438594 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-11 06:26:08.438605 | orchestrator | ++ CEPH_VERSION=reef 2026-04-11 06:26:08.438653 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-11 
06:26:08.438665 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-11 06:26:08.438676 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-11 06:26:08.438688 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-11 06:26:08.438699 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-11 06:26:08.438710 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-11 06:26:08.438722 | orchestrator | ++ export ARA=false 2026-04-11 06:26:08.438733 | orchestrator | ++ ARA=false 2026-04-11 06:26:08.438744 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-11 06:26:08.438755 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-11 06:26:08.438766 | orchestrator | ++ export TEMPEST=false 2026-04-11 06:26:08.438777 | orchestrator | ++ TEMPEST=false 2026-04-11 06:26:08.438821 | orchestrator | ++ export IS_ZUUL=true 2026-04-11 06:26:08.438832 | orchestrator | ++ IS_ZUUL=true 2026-04-11 06:26:08.438843 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48 2026-04-11 06:26:08.438855 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48 2026-04-11 06:26:08.438866 | orchestrator | ++ export EXTERNAL_API=false 2026-04-11 06:26:08.438961 | orchestrator | ++ EXTERNAL_API=false 2026-04-11 06:26:08.438977 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-11 06:26:08.438988 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-11 06:26:08.438999 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-11 06:26:08.439010 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-11 06:26:08.439021 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-11 06:26:08.439031 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-11 06:26:08.439042 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-11 06:26:08.439053 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-11 06:26:08.439064 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-11 06:26:08.439082 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2026-04-11 06:26:08.444703 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-11 06:26:08.444728 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-11 06:26:08.444739 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-11 06:26:08.444750 | orchestrator | + osism migrate rabbitmq3to4 prepare 2026-04-11 06:26:17.448053 | orchestrator | 2026-04-11 06:26:17 | ERROR  | Unable to get ansible vault password 2026-04-11 06:26:17.448174 | orchestrator | 2026-04-11 06:26:17 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-11 06:26:17.448191 | orchestrator | 2026-04-11 06:26:17 | ERROR  | Dropping encrypted entries 2026-04-11 06:26:17.482288 | orchestrator | 2026-04-11 06:26:17 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-04-11 06:26:17.482942 | orchestrator | 2026-04-11 06:26:17 | INFO  | Kolla configuration check passed 2026-04-11 06:26:17.647948 | orchestrator | 2026-04-11 06:26:17 | INFO  | Created vhost 'openstack' with default_queue_type=quorum 2026-04-11 06:26:17.669957 | orchestrator | 2026-04-11 06:26:17 | INFO  | Set permissions for user 'openstack' on vhost 'openstack' 2026-04-11 06:26:17.936379 | orchestrator | + osism migrate rabbitmq3to4 list 2026-04-11 06:26:24.126252 | orchestrator | 2026-04-11 06:26:24 | ERROR  | Unable to get ansible vault password 2026-04-11 06:26:24.126346 | orchestrator | 2026-04-11 06:26:24 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-11 06:26:24.126358 | orchestrator | 2026-04-11 06:26:24 | ERROR  | Dropping encrypted entries 2026-04-11 06:26:24.159799 | orchestrator | 2026-04-11 06:26:24 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-04-11 06:26:24.292113 | orchestrator | 2026-04-11 06:26:24 | INFO  | Found 206 classic queue(s) in vhost '/': 2026-04-11 06:26:24.292212 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-04-11 06:26:24.292226 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-04-11 06:26:24.292238 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-04-11 06:26:24.292346 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-04-11 06:26:24.292364 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - barbican.workers_fanout_93d3a167a25d44af80c89a84b965a54e (vhost: /, messages: 0) 2026-04-11 06:26:24.292377 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - barbican.workers_fanout_a21455ce0cd64d9490c52a8d0c98eba2 (vhost: /, messages: 0) 2026-04-11 06:26:24.292388 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - barbican.workers_fanout_eee0e8a9a94c4fac9c6524cd04318aaa (vhost: /, messages: 0) 2026-04-11 06:26:24.292504 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-04-11 06:26:24.292521 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - central (vhost: /, messages: 0) 2026-04-11 06:26:24.292544 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-04-11 06:26:24.292556 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-04-11 06:26:24.292566 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-04-11 06:26:24.292577 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - central_fanout_011f0b2dc5b746ec86b1745fa898d349 (vhost: /, messages: 0) 2026-04-11 06:26:24.292588 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - central_fanout_26dbae4cc52848938abfbc2eebc27595 (vhost: /, messages: 0) 2026-04-11 
06:26:24.292599 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - central_fanout_45cf1180b1c24418825232acd18acdf8 (vhost: /, messages: 0) 2026-04-11 06:26:24.292610 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - central_fanout_7bfda07ac6d14424b344d7a0484abf5b (vhost: /, messages: 0) 2026-04-11 06:26:24.292985 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - central_fanout_d918290925824e469505683d84702175 (vhost: /, messages: 0) 2026-04-11 06:26:24.293012 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - central_fanout_e8f3c6f5de264ddebb970dfa21a9894d (vhost: /, messages: 0) 2026-04-11 06:26:24.293023 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-04-11 06:26:24.293034 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-04-11 06:26:24.293045 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-04-11 06:26:24.293057 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-04-11 06:26:24.293069 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-backup_fanout_3e979e4a2c8c40a284f8a1f5f5ef90f3 (vhost: /, messages: 0) 2026-04-11 06:26:24.293354 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-backup_fanout_be8c75dc086e4b2e8ccb5ef22878bec7 (vhost: /, messages: 0) 2026-04-11 06:26:24.293374 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-backup_fanout_c68668e5ae8a49539f4c2c6d02894dd7 (vhost: /, messages: 0) 2026-04-11 06:26:24.293385 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-04-11 06:26:24.293412 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-04-11 06:26:24.293424 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-04-11 06:26:24.293435 | orchestrator | 2026-04-11 
06:26:24 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-11 06:26:24.293446 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-scheduler_fanout_1e02af7f37bb4950b15da96570c207af (vhost: /, messages: 0) 2026-04-11 06:26:24.293981 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-scheduler_fanout_21845c1c2d5b4bacbc45719c1b0bddb8 (vhost: /, messages: 0) 2026-04-11 06:26:24.294131 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-scheduler_fanout_8d692ae1e196427585ee6efe19d6ab2e (vhost: /, messages: 0) 2026-04-11 06:26:24.294143 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-04-11 06:26:24.294155 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-04-11 06:26:24.294266 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-04-11 06:26:24.294282 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_065fb9e069b14e6aaa9eabeafd4d0c97 (vhost: /, messages: 0) 2026-04-11 06:26:24.294301 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-04-11 06:26:24.294312 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-04-11 06:26:24.294323 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_502695c01f2b4fb9857670c3b05d9025 (vhost: /, messages: 0) 2026-04-11 06:26:24.294334 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-04-11 06:26:24.294345 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-04-11 06:26:24.294356 | orchestrator | 2026-04-11 06:26:24 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_3f022494c5564e1c9b378f687ec38eca (vhost: /, messages: 0) 2026-04-11 06:26:24.294442 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-volume_fanout_0d92ec7a78154ad795698e2a0741842e (vhost: /, messages: 0) 2026-04-11 06:26:24.294459 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-volume_fanout_a440823c1d95433a9448777d7ac7bee5 (vhost: /, messages: 0) 2026-04-11 06:26:24.294470 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - cinder-volume_fanout_d0bbf618ef694e62bb87f3e5ba5ce415 (vhost: /, messages: 0) 2026-04-11 06:26:24.294482 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - compute (vhost: /, messages: 0) 2026-04-11 06:26:24.294727 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-04-11 06:26:24.294819 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-04-11 06:26:24.294831 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-04-11 06:26:24.294847 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - compute_fanout_c95a916145774daead02784141e962a8 (vhost: /, messages: 0) 2026-04-11 06:26:24.294858 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - compute_fanout_d75a6cefd4834befb1439a580bea90e3 (vhost: /, messages: 0) 2026-04-11 06:26:24.294869 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - compute_fanout_ebc42c26878449e59a740128e2b60e00 (vhost: /, messages: 0) 2026-04-11 06:26:24.295264 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - conductor (vhost: /, messages: 0) 2026-04-11 06:26:24.295285 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-04-11 06:26:24.295301 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-04-11 06:26:24.295392 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-04-11 06:26:24.295406 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - conductor_fanout_1597d198e4dc4a629755ab6a71bcd54a (vhost: /, messages: 0) 2026-04-11 06:26:24.295432 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - conductor_fanout_2390109574c44d8d9a0d048d1981729c (vhost: /, messages: 0) 2026-04-11 06:26:24.295443 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - conductor_fanout_34e2c584136342b8847b59b18de3aa97 (vhost: /, messages: 0) 2026-04-11 06:26:24.295865 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - conductor_fanout_9ad1e3e262fc455a92811fa0b081dfda (vhost: /, messages: 0) 2026-04-11 06:26:24.295886 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - conductor_fanout_df7c236f75d94c2f94c82e176a5dc69a (vhost: /, messages: 0) 2026-04-11 06:26:24.295898 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - event.sample (vhost: /, messages: 5) 2026-04-11 06:26:24.295980 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-04-11 06:26:24.295993 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - magnum-conductor.o324b5nldxgs (vhost: /, messages: 0) 2026-04-11 06:26:24.296004 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - magnum-conductor.r6ouq6chauay (vhost: /, messages: 0) 2026-04-11 06:26:24.296020 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - magnum-conductor.zveegrxdgu45 (vhost: /, messages: 0) 2026-04-11 06:26:24.296031 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - magnum-conductor_fanout_36aeb47ca98d40ca86774a1a253f6fbc (vhost: /, messages: 0) 2026-04-11 06:26:24.296042 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - magnum-conductor_fanout_529f02b99eaa4bf48a5a44e4fa79144a (vhost: /, messages: 0) 2026-04-11 06:26:24.296277 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - magnum-conductor_fanout_7056a329ee6e491e9084674a0b2a88da (vhost: /, messages: 0) 2026-04-11 06:26:24.296296 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - magnum-conductor_fanout_85f2fca774fc41d29c93a791ce2daf49 (vhost: /, 
messages: 0) 2026-04-11 06:26:24.296534 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - magnum-conductor_fanout_9022d045491f438098cb251c983400ef (vhost: /, messages: 0) 2026-04-11 06:26:24.296553 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - magnum-conductor_fanout_a4fc77b03dbe402abe0c4054ee018432 (vhost: /, messages: 0) 2026-04-11 06:26:24.296564 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - magnum-conductor_fanout_d70d8595e8524a0ca43ca8a3441152fe (vhost: /, messages: 0) 2026-04-11 06:26:24.296575 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - magnum-conductor_fanout_ea2c139c802d404b80bc37760845f7c4 (vhost: /, messages: 0) 2026-04-11 06:26:24.296586 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - magnum-conductor_fanout_f74f1fa9920e458aa7ed0976c72b19f6 (vhost: /, messages: 0) 2026-04-11 06:26:24.296741 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-04-11 06:26:24.296759 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-04-11 06:26:24.296770 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-04-11 06:26:24.296988 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-04-11 06:26:24.297005 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-data_fanout_3ba8d527dbed4dcb9e7b27ac4e3c239f (vhost: /, messages: 0) 2026-04-11 06:26:24.297015 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-data_fanout_d6322c130eae41a58dd024d91d04fd13 (vhost: /, messages: 0) 2026-04-11 06:26:24.297211 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-data_fanout_d78be345949441c086a42fbf5c3e4f7d (vhost: /, messages: 0) 2026-04-11 06:26:24.297228 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-scheduler (vhost: /, messages: 0) 2026-04-11 06:26:24.297594 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, 
messages: 0) 2026-04-11 06:26:24.297611 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-04-11 06:26:24.297658 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-11 06:26:24.297670 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-scheduler_fanout_05fb304adc354abe8c574a16c71fe2f9 (vhost: /, messages: 0) 2026-04-11 06:26:24.297680 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-scheduler_fanout_4da690331784469a8e6e9c783752801d (vhost: /, messages: 0) 2026-04-11 06:26:24.297690 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-scheduler_fanout_82abbd856d93491ab9ebbb89023e42ac (vhost: /, messages: 0) 2026-04-11 06:26:24.297707 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-04-11 06:26:24.297997 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-04-11 06:26:24.298047 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-04-11 06:26:24.298060 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-04-11 06:26:24.298069 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-share_fanout_565c3c86aa52421bb3c8a41fbe690d14 (vhost: /, messages: 0) 2026-04-11 06:26:24.298282 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-share_fanout_a8f0cdd363e740f09e8fdc5f458ac48b (vhost: /, messages: 0) 2026-04-11 06:26:24.298299 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - manila-share_fanout_ec32c8e306434e819788b9c8e0f9f5a9 (vhost: /, messages: 0) 2026-04-11 06:26:24.298309 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - notifications.audit (vhost: /, messages: 0) 2026-04-11 06:26:24.298319 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - notifications.critical (vhost: /, messages: 
0) 2026-04-11 06:26:24.298329 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-04-11 06:26:24.298570 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-04-11 06:26:24.298587 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-04-11 06:26:24.298597 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-04-11 06:26:24.298607 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-04-11 06:26:24.298750 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-04-11 06:26:24.298766 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-04-11 06:26:24.298776 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-04-11 06:26:24.298786 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-04-11 06:26:24.299410 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - octavia_provisioning_v2_fanout_67e8a05ea56e48a8ba8c11e4b0bc121b (vhost: /, messages: 0) 2026-04-11 06:26:24.299443 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - octavia_provisioning_v2_fanout_c39f35d4079147d994b60187096cb2d2 (vhost: /, messages: 0) 2026-04-11 06:26:24.299454 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - octavia_provisioning_v2_fanout_c98b82d7246e4dc88ad9112750c46d84 (vhost: /, messages: 0) 2026-04-11 06:26:24.299464 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - producer (vhost: /, messages: 0) 2026-04-11 06:26:24.299473 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0) 2026-04-11 06:26:24.299496 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - producer.testbed-node-1 (vhost: /, 
messages: 0) 2026-04-11 06:26:24.299511 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 2026-04-11 06:26:24.299521 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - producer_fanout_9f1a725d639f46888d90c57ecc9ee848 (vhost: /, messages: 0) 2026-04-11 06:26:24.299531 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - producer_fanout_a634e34ba9ea404faafe3bf5730b5ba5 (vhost: /, messages: 0) 2026-04-11 06:26:24.299540 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - producer_fanout_ba9194b5aec24ea29dafcc2879d00ec3 (vhost: /, messages: 0) 2026-04-11 06:26:24.299550 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - producer_fanout_c83cb3ee9e174b9c80c40eb19369966b (vhost: /, messages: 0) 2026-04-11 06:26:24.299847 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - producer_fanout_e0c68bf2cc374e548a0b18f5c7a93cec (vhost: /, messages: 0) 2026-04-11 06:26:24.299866 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - producer_fanout_ede84d5c924841349b5cfbbc0182dde4 (vhost: /, messages: 0) 2026-04-11 06:26:24.299876 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-04-11 06:26:24.299886 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-04-11 06:26:24.299896 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-04-11 06:26:24.299914 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-04-11 06:26:24.299990 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-plugin_fanout_19d26586bd7e4575bba31985fcaf7415 (vhost: /, messages: 0) 2026-04-11 06:26:24.300010 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-plugin_fanout_2a3f4948b43646658b1312ec7c0d0079 (vhost: /, messages: 0) 2026-04-11 06:26:24.300020 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-plugin_fanout_65d45f38ad274175b6d024c652f85d9b (vhost: /, messages: 0) 2026-04-11 06:26:24.300030 
| orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-plugin_fanout_8c4a3147e6ea4960928e671812c7666e (vhost: /, messages: 0) 2026-04-11 06:26:24.300143 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-plugin_fanout_9fa9049d944b40ecadf99a98f950a63b (vhost: /, messages: 0) 2026-04-11 06:26:24.300465 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-plugin_fanout_a58106c4d97647f58980e33c67b1953e (vhost: /, messages: 0) 2026-04-11 06:26:24.300482 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-plugin_fanout_b9a7af863272495cb0a7c513c5c18396 (vhost: /, messages: 0) 2026-04-11 06:26:24.300492 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-plugin_fanout_bb4d38b83fbb4d268285e31f78024525 (vhost: /, messages: 0) 2026-04-11 06:26:24.300502 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-plugin_fanout_f534cc46df254a239a1395b146b3426c (vhost: /, messages: 0) 2026-04-11 06:26:24.300511 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-04-11 06:26:24.300521 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-04-11 06:26:24.300828 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-04-11 06:26:24.300855 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-04-11 06:26:24.300871 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_00fbc271867c4ef281e80738dce540e5 (vhost: /, messages: 0) 2026-04-11 06:26:24.300888 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_0790ff229ec04494b12e46d62920d588 (vhost: /, messages: 0) 2026-04-11 06:26:24.301163 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_11ddd62a93684debbb99fd227468814a (vhost: /, messages: 0) 2026-04-11 06:26:24.301182 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_1326fd19a49447efa8bad53ad10064fd 
(vhost: /, messages: 0) 2026-04-11 06:26:24.301192 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_19cb75d060d04464ac71700d6672dcb0 (vhost: /, messages: 0) 2026-04-11 06:26:24.301202 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_2d3e168203d44777b92c018ab2f48496 (vhost: /, messages: 0) 2026-04-11 06:26:24.301211 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_31ebe84004414c9d9ebbe715eb3f0c22 (vhost: /, messages: 0) 2026-04-11 06:26:24.301475 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_3bb4fd1c8d684f3b935ba8729367d5fe (vhost: /, messages: 0) 2026-04-11 06:26:24.301493 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_3f45d3354eae48c3b386fc9ff4c25bee (vhost: /, messages: 0) 2026-04-11 06:26:24.301503 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_5024ee08b82a40be9e97fa4b702722be (vhost: /, messages: 0) 2026-04-11 06:26:24.301514 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_5c27a37104884631833912b51bc01f06 (vhost: /, messages: 0) 2026-04-11 06:26:24.301615 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_78c94f6a52dc4c2abb005153936e4b45 (vhost: /, messages: 0) 2026-04-11 06:26:24.301627 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_ad65d6ba21e5488b9c1eda1ea9c71d61 (vhost: /, messages: 0) 2026-04-11 06:26:24.301659 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_b0ee164693304e3791740ce2e5f947f2 (vhost: /, messages: 0) 2026-04-11 06:26:24.301678 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_b17b255a08de440b8ada1cf282c60dc4 (vhost: /, messages: 0) 2026-04-11 06:26:24.301692 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_cca6ff111bc240b8b3a238d153452b28 (vhost: /, messages: 0) 2026-04-11 06:26:24.301713 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - 
q-reports-plugin_fanout_d4e0b09034f44490b7439e5bd5569f95 (vhost: /, messages: 0) 2026-04-11 06:26:24.301811 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-reports-plugin_fanout_d628069b86b54e79a1d0edfbeb39d8a9 (vhost: /, messages: 0) 2026-04-11 06:26:24.301821 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-04-11 06:26:24.301955 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-04-11 06:26:24.301968 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-04-11 06:26:24.301976 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-04-11 06:26:24.302135 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-server-resource-versions_fanout_146780eb4ccd43588999b230a6230337 (vhost: /, messages: 0) 2026-04-11 06:26:24.302149 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-server-resource-versions_fanout_23a343fcb7c342b7941158cc0524ae56 (vhost: /, messages: 0) 2026-04-11 06:26:24.302257 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-server-resource-versions_fanout_3303412aeaad4133a5e54ec7c3fce39d (vhost: /, messages: 0) 2026-04-11 06:26:24.302269 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-server-resource-versions_fanout_4a16c738ee91451d8e271c696e497af5 (vhost: /, messages: 0) 2026-04-11 06:26:24.302289 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-server-resource-versions_fanout_514eefd3af444b9c918aae20719e3e97 (vhost: /, messages: 0) 2026-04-11 06:26:24.302297 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-server-resource-versions_fanout_8c954e72d6ff415291cb3f36f231c6bd (vhost: /, messages: 0) 2026-04-11 06:26:24.302610 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-server-resource-versions_fanout_cffb9437093e4cc3ab0732b40b0f06e4 (vhost: /, messages: 0) 2026-04-11 06:26:24.302624 
| orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-server-resource-versions_fanout_d5c02f4061c74ae7a38d39e0748552b1 (vhost: /, messages: 0) 2026-04-11 06:26:24.302653 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - q-server-resource-versions_fanout_dfa4b3fdc68340dcb90ac00b75c586a9 (vhost: /, messages: 0) 2026-04-11 06:26:24.302662 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_0daaa29cc4d84169860c6ab45c4675e4 (vhost: /, messages: 0) 2026-04-11 06:26:24.302670 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_109447c1ca3a41f39d6e2ec21b80b484 (vhost: /, messages: 0) 2026-04-11 06:26:24.303085 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_11ea9882ffee40f590963ddf26a9c44f (vhost: /, messages: 0) 2026-04-11 06:26:24.303101 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_137b355907914c9895554c3f4146716f (vhost: /, messages: 0) 2026-04-11 06:26:24.303109 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_2668994006af415582d1d1328c311c9c (vhost: /, messages: 0) 2026-04-11 06:26:24.303117 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_2893e8cb0b96480fbb895a2af46ce3e0 (vhost: /, messages: 0) 2026-04-11 06:26:24.303125 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_390d12fe8ad24957a094a3eba3f3c352 (vhost: /, messages: 0) 2026-04-11 06:26:24.303133 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_48553b1e8785466f8e909a3e95ffe21c (vhost: /, messages: 0) 2026-04-11 06:26:24.303365 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_6afad6bf42944b7eb5d6b56bc3c1ab0d (vhost: /, messages: 0) 2026-04-11 06:26:24.303379 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_6f725d23bda84bfa88df9f360b1a8132 (vhost: /, messages: 0) 2026-04-11 06:26:24.303387 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_91cdc63b1929484fb335edd1f8b44de2 (vhost: /, messages: 0) 2026-04-11 06:26:24.303411 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_96169ce8fa5e4886bd823d1b54763cd7 (vhost: /, messages: 0) 2026-04-11 
06:26:24.303419 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_af46f20d0fc8428aa5478fbcdf262250 (vhost: /, messages: 0) 2026-04-11 06:26:24.303427 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_b006677a74be419f839cc1da974143ed (vhost: /, messages: 0) 2026-04-11 06:26:24.303473 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_b5e9ff98e80948b2a422913c795918fb (vhost: /, messages: 0) 2026-04-11 06:26:24.303482 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_ba907eae6ca44c198deb2acc48c3ad98 (vhost: /, messages: 0) 2026-04-11 06:26:24.303497 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_bf598a8d3c8b4b9e93f6db775982a1a1 (vhost: /, messages: 0) 2026-04-11 06:26:24.303508 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - reply_ed74af6d33854ee69438efaa3cad3b17 (vhost: /, messages: 0) 2026-04-11 06:26:24.303517 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - scheduler (vhost: /, messages: 0) 2026-04-11 06:26:24.303728 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-04-11 06:26:24.303742 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-04-11 06:26:24.303758 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-11 06:26:24.303765 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - scheduler_fanout_0ca66dba052b4b4494a3309ba121f355 (vhost: /, messages: 0) 2026-04-11 06:26:24.303878 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - scheduler_fanout_26463e7f3e7f424d9d7d1b0ce6bfc682 (vhost: /, messages: 0) 2026-04-11 06:26:24.303889 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - scheduler_fanout_2a9f0eb11ce742089afc6310605ad9da (vhost: /, messages: 0) 2026-04-11 06:26:24.303895 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - scheduler_fanout_31fe660b52af4bd6bc3e2edf0b424406 (vhost: /, messages: 0) 2026-04-11 06:26:24.304158 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - 
scheduler_fanout_afeb784a24194bd7afae65a73c381b09 (vhost: /, messages: 0)
2026-04-11 06:26:24.304174 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - scheduler_fanout_dd4e892ede4045059c9b5a1c87418925 (vhost: /, messages: 0)
2026-04-11 06:26:24.304280 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - worker (vhost: /, messages: 0)
2026-04-11 06:26:24.304291 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-04-11 06:26:24.304298 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-04-11 06:26:24.304395 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-04-11 06:26:24.304405 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - worker_fanout_2943ced9ed8d4bde94bb5573bdc84727 (vhost: /, messages: 0)
2026-04-11 06:26:24.304413 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - worker_fanout_4b11bf236f3f4748bfa634ff1a7e52f3 (vhost: /, messages: 0)
2026-04-11 06:26:24.304422 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - worker_fanout_658ad6cb2afe44c9a719dd869579764b (vhost: /, messages: 0)
2026-04-11 06:26:24.304604 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - worker_fanout_aa757db9dc4c4c029dfdb66c83157496 (vhost: /, messages: 0)
2026-04-11 06:26:24.304884 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - worker_fanout_cfd6b64dc79f43059f1605556fde5765 (vhost: /, messages: 0)
2026-04-11 06:26:24.304902 | orchestrator | 2026-04-11 06:26:24 | INFO  |  - worker_fanout_d884b62acd7949969177ebe0c5d70ec4 (vhost: /, messages: 0)
2026-04-11 06:26:24.554420 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-04-11 06:26:30.871893 | orchestrator | 2026-04-11 06:26:30 | ERROR  | Unable to get ansible vault password
2026-04-11 06:26:30.872005 | orchestrator | 2026-04-11 06:26:30 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-11 06:26:30.872021 | orchestrator | 2026-04-11 06:26:30 | ERROR  | Dropping encrypted entries
2026-04-11 06:26:30.905990 | orchestrator | 2026-04-11 06:26:30 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-11 06:26:30.927743 | orchestrator | 2026-04-11 06:26:30 | INFO  | Found 46 exchange(s) in vhost '/':
2026-04-11 06:26:30.927996 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - aodh (type: topic, transient)
2026-04-11 06:26:30.928015 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - barbican.workers_fanout (type: fanout, transient)
2026-04-11 06:26:30.928029 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - ceilometer (type: topic, transient)
2026-04-11 06:26:30.928051 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - central_fanout (type: fanout, transient)
2026-04-11 06:26:30.928089 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - cinder (type: topic, transient)
2026-04-11 06:26:30.928101 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - cinder-backup_fanout (type: fanout, transient)
2026-04-11 06:26:30.928112 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - cinder-scheduler_fanout (type: fanout, transient)
2026-04-11 06:26:30.928138 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout (type: fanout, transient)
2026-04-11 06:26:30.928151 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout (type: fanout, transient)
2026-04-11 06:26:30.928247 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout (type: fanout, transient)
2026-04-11 06:26:30.928261 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - cinder-volume_fanout (type: fanout, transient)
2026-04-11 06:26:30.928272 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - compute_fanout (type: fanout, transient)
2026-04-11 06:26:30.928283 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - conductor_fanout (type: fanout, transient)
2026-04-11 06:26:30.928294 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - designate (type: topic, transient)
2026-04-11 06:26:30.928305 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - dns (type: topic, transient)
2026-04-11 06:26:30.928316 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - glance (type: topic, transient)
2026-04-11 06:26:30.928327 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - heat (type: topic, transient)
2026-04-11 06:26:30.929765 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - ironic (type: topic, transient)
2026-04-11 06:26:30.929855 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - keystone (type: topic, transient)
2026-04-11 06:26:30.929881 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - l3_agent_fanout (type: fanout, transient)
2026-04-11 06:26:30.929906 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - magnum (type: topic, transient)
2026-04-11 06:26:30.929921 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - magnum-conductor_fanout (type: fanout, transient)
2026-04-11 06:26:30.929936 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - manila-data_fanout (type: fanout, transient)
2026-04-11 06:26:30.929952 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - manila-scheduler_fanout (type: fanout, transient)
2026-04-11 06:26:30.929968 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - manila-share_fanout (type: fanout, transient)
2026-04-11 06:26:30.929983 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - neutron (type: topic, transient)
2026-04-11 06:26:30.929998 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - neutron-vo-Network-1.1_fanout (type: fanout, transient)
2026-04-11 06:26:30.930068 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - neutron-vo-Port-1.10_fanout (type: fanout, transient)
2026-04-11 06:26:30.930084 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - neutron-vo-SecurityGroup-1.6_fanout (type: fanout, transient)
2026-04-11 06:26:30.930094 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - neutron-vo-SecurityGroupRule-1.3_fanout (type: fanout, transient)
2026-04-11 06:26:30.930103 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - neutron-vo-Subnet-1.2_fanout (type: fanout, transient)
2026-04-11 06:26:30.930113 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - nova (type: topic, transient)
2026-04-11 06:26:30.930123 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - octavia (type: topic, transient)
2026-04-11 06:26:30.930133 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - octavia_provisioning_v2_fanout (type: fanout, transient)
2026-04-11 06:26:30.930177 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - openstack (type: topic, transient)
2026-04-11 06:26:30.930188 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - producer_fanout (type: fanout, transient)
2026-04-11 06:26:30.930197 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - q-agent-notifier-port-update_fanout (type: fanout, transient)
2026-04-11 06:26:30.930208 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - q-agent-notifier-security_group-update_fanout (type: fanout, transient)
2026-04-11 06:26:30.930218 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - q-plugin_fanout (type: fanout, transient)
2026-04-11 06:26:30.930228 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - q-reports-plugin_fanout (type: fanout, transient)
2026-04-11 06:26:30.930237 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - q-server-resource-versions_fanout (type: fanout, transient)
2026-04-11 06:26:30.930247 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - scheduler_fanout (type: fanout, transient)
2026-04-11 06:26:30.930256 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - swift (type: topic, transient)
2026-04-11 06:26:30.930266 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - trove (type: topic, transient)
2026-04-11 06:26:30.930288 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - worker_fanout (type: fanout, transient)
2026-04-11 06:26:30.930299 | orchestrator | 2026-04-11 06:26:30 | INFO  |  - zaqar (type: topic, transient)
2026-04-11 06:26:31.168489 | orchestrator | + osism apply -a upgrade keystone
2026-04-11 06:26:32.461176 | orchestrator | 2026-04-11 06:26:32 | INFO  | Prepare task for execution of keystone.
2026-04-11 06:26:32.529063 | orchestrator | 2026-04-11 06:26:32 | INFO  | Task 435a47a7-d5c3-481d-b835-50f8e04aecd4 (keystone) was prepared for execution.
2026-04-11 06:26:32.529159 | orchestrator | 2026-04-11 06:26:32 | INFO  | It takes a moment until task 435a47a7-d5c3-481d-b835-50f8e04aecd4 (keystone) has been started and output is visible here.
2026-04-11 06:26:45.484140 | orchestrator |
2026-04-11 06:26:45.484271 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 06:26:45.484295 | orchestrator |
2026-04-11 06:26:45.484314 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 06:26:45.484331 | orchestrator | Saturday 11 April 2026 06:26:37 +0000 (0:00:01.526) 0:00:01.526 ********
2026-04-11 06:26:45.484348 | orchestrator | ok: [testbed-node-0]
2026-04-11 06:26:45.484366 | orchestrator | ok: [testbed-node-1]
2026-04-11 06:26:45.484383 | orchestrator | ok: [testbed-node-2]
2026-04-11 06:26:45.484402 | orchestrator |
2026-04-11 06:26:45.484445 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 06:26:45.484467 | orchestrator | Saturday 11 April 2026 06:26:39 +0000 (0:00:01.920) 0:00:03.447 ********
2026-04-11 06:26:45.484485 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-11 06:26:45.484504 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-11 06:26:45.484521 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-11 06:26:45.484539 | orchestrator |
2026-04-11 06:26:45.484557 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-11 06:26:45.484575 | orchestrator |
2026-04-11 06:26:45.484592 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-11 06:26:45.484610 | orchestrator | Saturday 11 April 2026 06:26:41 +0000 (0:00:01.692) 0:00:05.140 ******** 2026-04-11 06:26:45.484630 | orchestrator | included: /ansible/roles/keystone/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:26:45.484674 | orchestrator | 2026-04-11 06:26:45.484696 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-11 06:26:45.484715 | orchestrator | Saturday 11 April 2026 06:26:43 +0000 (0:00:02.088) 0:00:07.228 ******** 2026-04-11 06:26:45.484773 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:26:45.484803 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:26:45.484843 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 06:26:45.484891 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:26:45.484912 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 06:26:45.484944 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:26:45.484963 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 06:26:45.484981 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:26:45.485021 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:26:56.592403 | orchestrator | 2026-04-11 06:26:56.592548 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-11 06:26:56.592576 | orchestrator | Saturday 11 April 2026 06:26:46 +0000 (0:00:03.544) 0:00:10.773 ******** 2026-04-11 06:26:56.592598 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:26:56.592618 | orchestrator | 2026-04-11 06:26:56.592638 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-11 06:26:56.592651 | orchestrator | Saturday 11 April 2026 06:26:47 +0000 (0:00:01.120) 0:00:11.894 ******** 2026-04-11 06:26:56.592701 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:26:56.592723 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:26:56.592741 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:26:56.592758 | orchestrator | 2026-04-11 06:26:56.592777 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-11 06:26:56.592822 | orchestrator | Saturday 11 April 2026 06:26:49 +0000 (0:00:01.289) 0:00:13.184 ******** 2026-04-11 06:26:56.592841 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 06:26:56.592860 | orchestrator | 2026-04-11 06:26:56.592879 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-11 06:26:56.592898 | orchestrator | Saturday 11 April 2026 06:26:51 +0000 (0:00:02.177) 0:00:15.362 ******** 2026-04-11 06:26:56.592919 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:26:56.592938 | orchestrator | 2026-04-11 06:26:56.592958 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-11 06:26:56.592973 | orchestrator | Saturday 11 April 2026 06:26:53 +0000 (0:00:01.987) 0:00:17.349 ******** 
2026-04-11 06:26:56.592992 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:26:56.593011 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:26:56.593064 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:26:56.593091 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8023'], 'timeout': '30'}}}) 2026-04-11 06:26:56.593105 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 06:26:56.593118 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 06:26:56.593131 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
2026-04-11 06:26:56.593145 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:26:56.593163 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:26:56.593177 | orchestrator | 2026-04-11 06:26:56.593197 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-11 06:26:59.875243 | orchestrator | Saturday 11 April 2026 06:26:57 +0000 (0:00:04.175) 0:00:21.525 ******** 2026-04-11 06:26:59.875352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 06:26:59.875374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:26:59.875389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 06:26:59.875402 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:26:59.875433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 06:26:59.875466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:26:59.875520 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 06:26:59.875544 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:26:59.875565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 06:26:59.875584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:26:59.875604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 06:26:59.875622 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:26:59.875640 | orchestrator | 2026-04-11 06:26:59.875660 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-11 06:26:59.875713 | orchestrator | Saturday 11 April 2026 06:26:59 +0000 (0:00:01.892) 0:00:23.418 ******** 2026-04-11 06:26:59.875758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 06:27:02.526794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:27:02.526935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}})  2026-04-11 06:27:02.526961 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:27:02.526986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 06:27:02.527008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:27:02.527081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 06:27:02.527101 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:27:02.527147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 06:27:02.527167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:27:02.527186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 06:27:02.527204 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:27:02.527224 | orchestrator | 2026-04-11 06:27:02.527246 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-11 06:27:02.527268 | orchestrator | Saturday 11 April 2026 06:27:01 +0000 (0:00:01.669) 0:00:25.087 ******** 2026-04-11 06:27:02.527297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:27:02.527346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:27:08.446505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:27:08.446638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 06:27:08.446657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 06:27:08.447525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 06:27:08.447564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:27:08.447611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:27:08.447633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:27:08.447654 | orchestrator | 2026-04-11 06:27:08.447671 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-11 06:27:08.447712 | orchestrator | Saturday 11 April 2026 06:27:05 +0000 (0:00:04.273) 0:00:29.361 ******** 2026-04-11 06:27:08.447725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:27:08.447739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:27:08.447776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:27:08.447799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:27:19.257189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:27:19.257326 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:27:19.257385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:27:19.257424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:27:19.257442 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:27:19.257460 | orchestrator | 2026-04-11 06:27:19.257481 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-11 06:27:19.257501 | orchestrator | Saturday 11 April 2026 06:27:11 +0000 (0:00:06.560) 0:00:35.921 ******** 2026-04-11 06:27:19.257519 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:27:19.257537 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:27:19.257553 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:27:19.257570 | orchestrator | 2026-04-11 06:27:19.257587 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-11 06:27:19.257604 | orchestrator | Saturday 11 April 2026 06:27:14 +0000 (0:00:02.377) 0:00:38.298 ******** 2026-04-11 06:27:19.257620 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:27:19.257658 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:27:19.257676 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:27:19.257719 | orchestrator | 2026-04-11 06:27:19.257737 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-11 06:27:19.257753 | orchestrator | Saturday 11 April 2026 06:27:15 +0000 (0:00:01.604) 0:00:39.903 ******** 2026-04-11 06:27:19.257770 | orchestrator | skipping: [testbed-node-0] 2026-04-11 
06:27:19.257787 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:27:19.257804 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:27:19.257821 | orchestrator | 2026-04-11 06:27:19.257837 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-11 06:27:19.257853 | orchestrator | Saturday 11 April 2026 06:27:17 +0000 (0:00:01.334) 0:00:41.238 ******** 2026-04-11 06:27:19.257869 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:27:19.257887 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:27:19.257904 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:27:19.257920 | orchestrator | 2026-04-11 06:27:19.257936 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-11 06:27:19.257953 | orchestrator | Saturday 11 April 2026 06:27:18 +0000 (0:00:01.553) 0:00:42.792 ******** 2026-04-11 06:27:19.257981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 06:27:19.258010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:27:19.258100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 06:27:19.258119 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:27:19.258150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 06:27:44.080769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:27:44.080920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 06:27:44.080950 | orchestrator | skipping: 
[testbed-node-1] 2026-04-11 06:27:44.080987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 06:27:44.081004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:27:44.081016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 06:27:44.081028 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:27:44.081039 | orchestrator | 2026-04-11 06:27:44.081052 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-11 06:27:44.081064 | orchestrator | Saturday 11 April 2026 06:27:20 +0000 (0:00:01.606) 0:00:44.399 ******** 2026-04-11 06:27:44.081075 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:27:44.081086 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:27:44.081096 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:27:44.081107 | orchestrator | 2026-04-11 06:27:44.081119 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-11 06:27:44.081148 | orchestrator | Saturday 11 April 2026 06:27:21 +0000 (0:00:01.334) 0:00:45.733 ******** 2026-04-11 06:27:44.081170 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-11 06:27:44.081181 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-11 06:27:44.081192 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-11 06:27:44.081203 | orchestrator | 2026-04-11 06:27:44.081214 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-11 06:27:44.081225 | orchestrator | Saturday 11 April 2026 06:27:24 +0000 (0:00:02.844) 
0:00:48.578 ******** 2026-04-11 06:27:44.081236 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 06:27:44.081246 | orchestrator | 2026-04-11 06:27:44.081258 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-11 06:27:44.081271 | orchestrator | Saturday 11 April 2026 06:27:26 +0000 (0:00:01.976) 0:00:50.554 ******** 2026-04-11 06:27:44.081285 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:27:44.081298 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:27:44.081311 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:27:44.081323 | orchestrator | 2026-04-11 06:27:44.081336 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-11 06:27:44.081349 | orchestrator | Saturday 11 April 2026 06:27:28 +0000 (0:00:01.573) 0:00:52.128 ******** 2026-04-11 06:27:44.081362 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 06:27:44.081375 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-11 06:27:44.081388 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-11 06:27:44.081400 | orchestrator | 2026-04-11 06:27:44.081413 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-11 06:27:44.081426 | orchestrator | Saturday 11 April 2026 06:27:30 +0000 (0:00:02.213) 0:00:54.342 ******** 2026-04-11 06:27:44.081439 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:27:44.081452 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:27:44.081465 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:27:44.081478 | orchestrator | 2026-04-11 06:27:44.081490 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-11 06:27:44.081503 | orchestrator | Saturday 11 April 2026 06:27:31 +0000 (0:00:01.317) 0:00:55.659 ******** 2026-04-11 06:27:44.081516 | orchestrator | ok: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 
'crontab'})
2026-04-11 06:27:44.081529 | orchestrator | ok: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-11 06:27:44.081541 | orchestrator | ok: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-11 06:27:44.081554 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-11 06:27:44.081568 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-11 06:27:44.081580 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-11 06:27:44.081591 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-11 06:27:44.081607 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-11 06:27:44.081618 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-11 06:27:44.081629 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-11 06:27:44.081639 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-11 06:27:44.081651 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-11 06:27:44.081662 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-11 06:27:44.081672 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-11 06:27:44.081690 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-11 06:27:44.081700 | orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 06:27:44.081740 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 06:27:44.081753 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 06:27:44.081764 | orchestrator | ok: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 06:27:44.081774 | orchestrator | ok: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 06:27:44.081785 | orchestrator | ok: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 06:27:44.081796 | orchestrator |
2026-04-11 06:27:44.081807 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-04-11 06:27:44.081818 | orchestrator | Saturday 11 April 2026 06:27:41 +0000 (0:00:09.839) 0:01:05.499 ********
2026-04-11 06:27:44.081829 | orchestrator | ok: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 06:27:44.081839 | orchestrator | ok: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 06:27:44.081850 | orchestrator | ok: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 06:27:44.081861 | orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 06:27:44.081878 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 06:27:51.522325 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 06:27:51.522430 | orchestrator |
2026-04-11 06:27:51.522446 | orchestrator | TASK [service-check-containers : keystone | Check containers] ******************
2026-04-11 06:27:51.522458 | orchestrator | Saturday 11 April 2026 06:27:45 +0000 (0:00:04.081) 0:01:09.581 ********
2026-04-11 06:27:51.522474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:27:51.522508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:27:51.522543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-11 06:27:51.522574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 06:27:51.522587 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 06:27:51.522597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-11 06:27:51.522607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:27:51.522624 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:27:51.522642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-11 06:27:51.522652 | orchestrator | 2026-04-11 06:27:51.522663 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-11 06:27:51.522673 | orchestrator | Saturday 11 April 2026 06:27:49 +0000 (0:00:04.178) 0:01:13.760 ******** 2026-04-11 06:27:51.522683 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 06:27:51.522693 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:27:51.522703 | orchestrator | } 2026-04-11 06:27:51.522713 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 06:27:51.522771 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:27:51.522782 | orchestrator | } 2026-04-11 06:27:51.522791 | 
orchestrator | changed: [testbed-node-2] => { 2026-04-11 06:27:51.522800 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:27:51.522810 | orchestrator | } 2026-04-11 06:27:51.522819 | orchestrator | 2026-04-11 06:27:51.522829 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 06:27:51.522839 | orchestrator | Saturday 11 April 2026 06:27:51 +0000 (0:00:01.398) 0:01:15.159 ******** 2026-04-11 06:27:51.522859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 06:30:11.025003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:30:11.025107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 06:30:11.025148 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:30:11.025177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 06:30:11.025193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:30:11.025205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-11 06:30:11.025216 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:30:11.025246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-11 06:30:11.025267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-11 06:30:11.025283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}}) 
2026-04-11 06:30:11.025294 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:30:11.025306 | orchestrator |
2026-04-11 06:30:11.025317 | orchestrator | TASK [keystone : Enable log_bin_trust_function_creators function] **************
2026-04-11 06:30:11.025329 | orchestrator | Saturday 11 April 2026 06:27:53 +0000 (0:00:02.076) 0:01:17.236 ********
2026-04-11 06:30:11.025340 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:30:11.025351 | orchestrator |
2026-04-11 06:30:11.025361 | orchestrator | TASK [keystone : Init keystone database upgrade] *******************************
2026-04-11 06:30:11.025372 | orchestrator | Saturday 11 April 2026 06:27:56 +0000 (0:00:03.067) 0:01:20.304 ********
2026-04-11 06:30:11.025383 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:30:11.025394 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:30:11.025404 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:30:11.025415 | orchestrator |
2026-04-11 06:30:11.025426 | orchestrator | TASK [keystone : Finish keystone database upgrade] *****************************
2026-04-11 06:30:11.025437 | orchestrator | Saturday 11 April 2026 06:27:57 +0000 (0:00:01.446) 0:01:21.751 ********
2026-04-11 06:30:11.025448 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:30:11.025458 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:30:11.025469 | orchestrator | changed: [testbed-node-2]
2026-04-11 06:30:11.025480 | orchestrator |
2026-04-11 06:30:11.025490 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-11 06:30:11.025503 | orchestrator | Saturday 11 April 2026 06:27:59 +0000 (0:00:00.458) 0:01:23.721 ********
2026-04-11 06:30:11.025516 | orchestrator |
2026-04-11 06:30:11.025528 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-11 06:30:11.025541 | orchestrator | Saturday 11 April 2026 06:28:00 +0000 (0:00:00.458) 0:01:24.179 ********
2026-04-11 06:30:11.025553 | orchestrator |
2026-04-11 06:30:11.025566 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-11 06:30:11.025578 | orchestrator | Saturday 11 April 2026 06:28:00 +0000 (0:00:00.461) 0:01:24.640 ********
2026-04-11 06:30:11.025591 | orchestrator |
2026-04-11 06:30:11.025604 | orchestrator | RUNNING HANDLER [keystone : Init keystone database upgrade] ********************
2026-04-11 06:30:11.025617 | orchestrator | Saturday 11 April 2026 06:28:01 +0000 (0:00:00.799) 0:01:25.440 ********
2026-04-11 06:30:11.025629 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:30:11.025641 | orchestrator |
2026-04-11 06:30:11.025653 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-04-11 06:30:11.025665 | orchestrator | Saturday 11 April 2026 06:29:03 +0000 (0:01:02.255) 0:02:27.695 ********
2026-04-11 06:30:11.025683 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:30:11.025696 | orchestrator | changed: [testbed-node-1]
2026-04-11 06:30:11.025708 | orchestrator | changed: [testbed-node-2]
2026-04-11 06:30:11.025721 | orchestrator |
2026-04-11 06:30:11.025733 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-04-11 06:30:11.025746 | orchestrator | Saturday 11 April 2026 06:29:57 +0000 (0:00:54.167) 0:03:21.863 ********
2026-04-11 06:30:11.025758 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:30:11.025771 | orchestrator | changed: [testbed-node-1]
2026-04-11 06:30:11.025786 | orchestrator | changed: [testbed-node-2]
2026-04-11 06:30:11.025805 | orchestrator |
2026-04-11 06:30:11.025825 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-04-11 06:30:11.025904 | orchestrator | Saturday 11 April 2026 06:30:11 +0000 (0:00:13.095) 0:03:34.958 ********
2026-04-11 06:30:41.406590 | orchestrator | changed: 
[testbed-node-0]
2026-04-11 06:30:41.406706 | orchestrator | changed: [testbed-node-1]
2026-04-11 06:30:41.406721 | orchestrator | changed: [testbed-node-2]
2026-04-11 06:30:41.406733 | orchestrator |
2026-04-11 06:30:41.406745 | orchestrator | RUNNING HANDLER [keystone : Finish keystone database upgrade] ******************
2026-04-11 06:30:41.406757 | orchestrator | Saturday 11 April 2026 06:30:24 +0000 (0:00:13.827) 0:03:48.786 ********
2026-04-11 06:30:41.406768 | orchestrator | changed: [testbed-node-2]
2026-04-11 06:30:41.406779 | orchestrator |
2026-04-11 06:30:41.406790 | orchestrator | TASK [keystone : Disable log_bin_trust_function_creators function] *************
2026-04-11 06:30:41.406801 | orchestrator | Saturday 11 April 2026 06:30:37 +0000 (0:00:12.547) 0:04:01.333 ********
2026-04-11 06:30:41.406812 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:30:41.406823 | orchestrator |
2026-04-11 06:30:41.406834 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 06:30:41.406846 | orchestrator | testbed-node-0 : ok=25  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-11 06:30:41.406859 | orchestrator | testbed-node-1 : ok=19  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-11 06:30:41.406944 | orchestrator | testbed-node-2 : ok=21  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-11 06:30:41.406955 | orchestrator |
2026-04-11 06:30:41.406966 | orchestrator |
2026-04-11 06:30:41.406977 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 06:30:41.406987 | orchestrator | Saturday 11 April 2026 06:30:41 +0000 (0:00:03.624) 0:04:04.958 ********
2026-04-11 06:30:41.406998 | orchestrator | ===============================================================================
2026-04-11 06:30:41.407027 | orchestrator | keystone : Init keystone database upgrade ------------------------------ 62.25s
2026-04-11 06:30:41.407039 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 54.17s
2026-04-11 06:30:41.407050 | orchestrator | keystone : Restart keystone container ---------------------------------- 13.83s
2026-04-11 06:30:41.407060 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 13.10s
2026-04-11 06:30:41.407071 | orchestrator | keystone : Finish keystone database upgrade ---------------------------- 12.55s
2026-04-11 06:30:41.407082 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.84s
2026-04-11 06:30:41.407093 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.56s
2026-04-11 06:30:41.407103 | orchestrator | keystone : Copying over config.json files for services ------------------ 4.27s
2026-04-11 06:30:41.407114 | orchestrator | service-check-containers : keystone | Check containers ------------------ 4.18s
2026-04-11 06:30:41.407125 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 4.18s
2026-04-11 06:30:41.407138 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 4.08s
2026-04-11 06:30:41.407174 | orchestrator | keystone : Disable log_bin_trust_function_creators function ------------- 3.62s
2026-04-11 06:30:41.407187 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 3.54s
2026-04-11 06:30:41.407199 | orchestrator | keystone : Enable log_bin_trust_function_creators function -------------- 3.07s
2026-04-11 06:30:41.407212 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.84s
2026-04-11 06:30:41.407224 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.38s
2026-04-11 06:30:41.407237 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 2.21s
2026-04-11 06:30:41.407249 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 2.18s
2026-04-11 06:30:41.407262 | orchestrator | keystone : include_tasks ------------------------------------------------ 2.09s
2026-04-11 06:30:41.407274 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.08s
2026-04-11 06:30:41.585319 | orchestrator | + osism apply -a upgrade placement
2026-04-11 06:30:42.866432 | orchestrator | 2026-04-11 06:30:42 | INFO  | Prepare task for execution of placement.
2026-04-11 06:30:42.932272 | orchestrator | 2026-04-11 06:30:42 | INFO  | Task 8c2812f2-db8a-4f2e-b57a-9a516b33a307 (placement) was prepared for execution.
2026-04-11 06:30:42.932420 | orchestrator | 2026-04-11 06:30:42 | INFO  | It takes a moment until task 8c2812f2-db8a-4f2e-b57a-9a516b33a307 (placement) has been started and output is visible here.
2026-04-11 06:31:37.970239 | orchestrator |
2026-04-11 06:31:37.970358 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 06:31:37.970375 | orchestrator |
2026-04-11 06:31:37.970387 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 06:31:37.970399 | orchestrator | Saturday 11 April 2026 06:30:48 +0000 (0:00:01.860) 0:00:01.860 ********
2026-04-11 06:31:37.970410 | orchestrator | ok: [testbed-node-0]
2026-04-11 06:31:37.970422 | orchestrator | ok: [testbed-node-1]
2026-04-11 06:31:37.970433 | orchestrator | ok: [testbed-node-2]
2026-04-11 06:31:37.970444 | orchestrator |
2026-04-11 06:31:37.970456 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 06:31:37.970468 | orchestrator | Saturday 11 April 2026 06:30:49 +0000 (0:00:01.766) 0:00:03.627 ********
2026-04-11 06:31:37.970488 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-11 06:31:37.970507 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-11 06:31:37.970524 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-11 06:31:37.970542 | orchestrator |
2026-04-11 06:31:37.970561 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-11 06:31:37.970581 | orchestrator |
2026-04-11 06:31:37.970602 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-11 06:31:37.970621 | orchestrator | Saturday 11 April 2026 06:30:51 +0000 (0:00:01.938) 0:00:05.566 ********
2026-04-11 06:31:37.970639 | orchestrator | included: /ansible/roles/placement/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 06:31:37.970651 | orchestrator |
2026-04-11 06:31:37.970663 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-04-11 06:31:37.970674 | orchestrator | Saturday 11 April 2026 06:30:56 +0000 (0:00:04.204) 0:00:09.771 ********
2026-04-11 06:31:37.970685 | orchestrator | ok: [testbed-node-0] => (item=placement (placement))
2026-04-11 06:31:37.970696 | orchestrator |
2026-04-11 06:31:37.970707 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] ***********
2026-04-11 06:31:37.970717 | orchestrator | Saturday 11 April 2026 06:31:01 +0000 (0:00:05.010) 0:00:14.781 ********
2026-04-11 06:31:37.970728 | orchestrator | ok: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-04-11 06:31:37.970741 | orchestrator | ok: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-04-11 06:31:37.970751 | orchestrator |
2026-04-11 06:31:37.970786 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-04-11 06:31:37.970797 | orchestrator | Saturday 11 April 2026 06:31:08 +0000 (0:00:07.519) 0:00:22.301 ********
2026-04-11 06:31:37.970808 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-11 06:31:37.970819 | orchestrator |
2026-04-11 06:31:37.970845 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-04-11 06:31:37.970856 | orchestrator | Saturday 11 April 2026 06:31:12 +0000 (0:00:04.238) 0:00:26.540 ********
2026-04-11 06:31:37.970885 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-04-11 06:31:37.970927 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-11 06:31:37.970948 | orchestrator |
2026-04-11 06:31:37.970965 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-04-11 06:31:37.970983 | orchestrator | Saturday 11 April 2026 06:31:19 +0000 (0:00:06.471) 0:00:33.012 ********
2026-04-11 06:31:37.971000 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-11 06:31:37.971018 | orchestrator |
2026-04-11 06:31:37.971036 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] **********
2026-04-11 06:31:37.971053 | orchestrator | Saturday 11 April 2026 06:31:23 +0000 (0:00:04.179) 0:00:37.191 ********
2026-04-11 06:31:37.971073 | orchestrator | ok: [testbed-node-0] => (item=placement -> service -> admin)
2026-04-11 06:31:37.971091 | orchestrator |
2026-04-11 06:31:37.971109 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-11 06:31:37.971128 | orchestrator | Saturday 11 April 2026 06:31:28 +0000 (0:00:04.991) 0:00:42.183 ********
2026-04-11 06:31:37.971147 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:31:37.971165 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:31:37.971183 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:31:37.971201 | orchestrator |
2026-04-11 06:31:37.971221 | orchestrator | TASK [placement : Ensuring config directories exist] 
*************************** 2026-04-11 06:31:37.971240 | orchestrator | Saturday 11 April 2026 06:31:30 +0000 (0:00:01.742) 0:00:43.925 ******** 2026-04-11 06:31:37.971295 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:31:37.971314 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:31:37.971347 | orchestrator | ok: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:31:37.971360 | orchestrator | 2026-04-11 06:31:37.971371 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-11 06:31:37.971382 | orchestrator | Saturday 11 April 2026 06:31:32 +0000 (0:00:02.160) 0:00:46.085 ******** 2026-04-11 06:31:37.971393 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:31:37.971404 | orchestrator | 2026-04-11 06:31:37.971431 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-11 06:31:37.971453 | orchestrator | Saturday 11 
April 2026 06:31:33 +0000 (0:00:01.113) 0:00:47.199 ******** 2026-04-11 06:31:37.971465 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:31:37.971475 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:31:37.971486 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:31:37.971497 | orchestrator | 2026-04-11 06:31:37.971507 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-11 06:31:37.971518 | orchestrator | Saturday 11 April 2026 06:31:34 +0000 (0:00:01.367) 0:00:48.567 ******** 2026-04-11 06:31:37.971529 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:31:37.971546 | orchestrator | 2026-04-11 06:31:37.971565 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-11 06:31:37.971583 | orchestrator | Saturday 11 April 2026 06:31:36 +0000 (0:00:01.924) 0:00:50.492 ******** 2026-04-11 06:31:37.971616 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:31:41.422394 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:31:41.422534 | orchestrator | ok: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:31:41.422556 | orchestrator | 2026-04-11 06:31:41.422570 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-11 06:31:41.422584 | orchestrator | Saturday 11 April 2026 06:31:39 +0000 (0:00:02.415) 0:00:52.907 ******** 2026-04-11 06:31:41.422598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 06:31:41.422612 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:31:41.422645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 06:31:41.422666 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:31:41.422679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 06:31:41.422693 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:31:41.422705 | orchestrator | 2026-04-11 06:31:41.422717 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-11 06:31:41.422729 | orchestrator | Saturday 11 April 2026 06:31:40 +0000 (0:00:01.755) 0:00:54.663 ******** 2026-04-11 06:31:41.422747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 06:31:41.422760 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:31:41.422773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 06:31:41.422786 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:31:41.422808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 06:31:56.595699 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:31:56.595833 | orchestrator | 
2026-04-11 06:31:56.595863 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-11 06:31:56.595886 | orchestrator | Saturday 11 April 2026 06:31:42 +0000 (0:00:01.496) 0:00:56.160 ******** 2026-04-11 06:31:56.595997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:31:56.596034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:31:56.596059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:31:56.596109 | orchestrator | 2026-04-11 06:31:56.596134 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-11 06:31:56.596177 | orchestrator | Saturday 11 April 2026 06:31:45 +0000 (0:00:02.551) 0:00:58.711 ******** 2026-04-11 06:31:56.596226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:31:56.596261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /']}}}}) 2026-04-11 06:31:56.596286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:31:56.596308 | orchestrator | 2026-04-11 06:31:56.596342 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-11 06:31:56.596363 | orchestrator | Saturday 11 April 2026 06:31:48 +0000 (0:00:03.721) 0:01:02.433 ******** 2026-04-11 06:31:56.596384 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-11 06:31:56.596405 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:31:56.596426 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-11 06:31:56.596448 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:31:56.596469 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-11 
06:31:56.596488 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:31:56.596509 | orchestrator | 2026-04-11 06:31:56.596527 | orchestrator | TASK [Configure uWSGI for Placement] ******************************************* 2026-04-11 06:31:56.596547 | orchestrator | Saturday 11 April 2026 06:31:50 +0000 (0:00:01.592) 0:01:04.025 ******** 2026-04-11 06:31:56.596567 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:31:56.596585 | orchestrator | 2026-04-11 06:31:56.596602 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] ********** 2026-04-11 06:31:56.596621 | orchestrator | Saturday 11 April 2026 06:31:52 +0000 (0:00:01.862) 0:01:05.888 ******** 2026-04-11 06:31:56.596638 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:31:56.596656 | orchestrator | changed: [testbed-node-1] 2026-04-11 06:31:56.596675 | orchestrator | changed: [testbed-node-2] 2026-04-11 06:31:56.596693 | orchestrator | 2026-04-11 06:31:56.596711 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-11 06:31:56.596728 | orchestrator | Saturday 11 April 2026 06:31:55 +0000 (0:00:02.911) 0:01:08.799 ******** 2026-04-11 06:31:56.596747 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:31:56.596765 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:31:56.596784 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:31:56.596802 | orchestrator | 2026-04-11 06:31:56.596835 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-11 06:32:03.719528 | orchestrator | Saturday 11 April 2026 06:31:57 +0000 (0:00:02.461) 0:01:11.261 ******** 2026-04-11 06:32:03.719643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 06:32:03.719665 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:32:03.719694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET 
/']}}}})  2026-04-11 06:32:03.719728 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:32:03.719739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 06:32:03.719749 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:32:03.719759 | orchestrator | 2026-04-11 06:32:03.719769 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-04-11 06:32:03.719779 | orchestrator | Saturday 11 April 2026 06:31:59 +0000 (0:00:02.241) 0:01:13.502 ******** 2026-04-11 06:32:03.719808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:32:03.719826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:32:03.719845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 06:32:03.719856 | orchestrator | 2026-04-11 06:32:03.719866 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart containers] *** 2026-04-11 06:32:03.719876 | orchestrator | Saturday 11 April 2026 06:32:02 +0000 (0:00:02.338) 0:01:15.841 ******** 2026-04-11 06:32:03.719886 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 06:32:03.719896 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:32:03.719906 | orchestrator | } 2026-04-11 06:32:03.719916 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 06:32:03.719959 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:32:03.719970 | orchestrator | } 2026-04-11 06:32:03.719980 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 06:32:03.719989 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:32:03.719999 | orchestrator | } 2026-04-11 06:32:03.720008 | orchestrator | 2026-04-11 06:32:03.720018 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 06:32:03.720027 | orchestrator | Saturday 11 April 2026 06:32:03 +0000 (0:00:01.327) 0:01:17.168 ******** 
2026-04-11 06:32:03.720046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 06:32:53.574833 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:32:53.574949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 06:32:53.575035 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:32:53.575044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 06:32:53.575050 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:32:53.575057 | orchestrator | 2026-04-11 06:32:53.575064 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-11 06:32:53.575072 | orchestrator | Saturday 11 April 2026 06:32:05 +0000 (0:00:02.182) 0:01:19.350 ******** 2026-04-11 06:32:53.575078 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:32:53.575086 | orchestrator | 2026-04-11 06:32:53.575092 | orchestrator | TASK [placement : Creating placement 
databases user and setting permissions] ***
2026-04-11 06:32:53.575098 | orchestrator | Saturday 11 April 2026 06:32:08 +0000 (0:00:03.052) 0:01:22.402 ********
2026-04-11 06:32:53.575105 | orchestrator | ok: [testbed-node-0]
2026-04-11 06:32:53.575111 | orchestrator |
2026-04-11 06:32:53.575117 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-04-11 06:32:53.575123 | orchestrator | Saturday 11 April 2026 06:32:12 +0000 (0:00:03.510) 0:01:25.913 ********
2026-04-11 06:32:53.575130 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:32:53.575136 | orchestrator |
2026-04-11 06:32:53.575142 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-11 06:32:53.575149 | orchestrator | Saturday 11 April 2026 06:32:27 +0000 (0:00:15.153) 0:01:41.066 ********
2026-04-11 06:32:53.575155 | orchestrator |
2026-04-11 06:32:53.575161 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-11 06:32:53.575167 | orchestrator | Saturday 11 April 2026 06:32:27 +0000 (0:00:00.454) 0:01:41.521 ********
2026-04-11 06:32:53.575173 | orchestrator |
2026-04-11 06:32:53.575180 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-11 06:32:53.575186 | orchestrator | Saturday 11 April 2026 06:32:28 +0000 (0:00:00.479) 0:01:42.000 ********
2026-04-11 06:32:53.575192 | orchestrator |
2026-04-11 06:32:53.575198 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-04-11 06:32:53.575204 | orchestrator | Saturday 11 April 2026 06:32:29 +0000 (0:00:00.816) 0:01:42.817 ********
2026-04-11 06:32:53.575209 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:32:53.575215 | orchestrator | changed: [testbed-node-1]
2026-04-11 06:32:53.575221 | orchestrator | changed: [testbed-node-2]
2026-04-11 06:32:53.575227 | orchestrator |
2026-04-11 06:32:53.575233 | orchestrator | TASK [placement : Perform Placement online data migration] *********************
2026-04-11 06:32:53.575239 | orchestrator | Saturday 11 April 2026 06:32:41 +0000 (0:00:12.657) 0:01:55.474 ********
2026-04-11 06:32:53.575245 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:32:53.575251 | orchestrator |
2026-04-11 06:32:53.575257 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 06:32:53.575269 | orchestrator | testbed-node-0 : ok=24  changed=9  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-11 06:32:53.575288 | orchestrator | testbed-node-1 : ok=14  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-11 06:32:53.575295 | orchestrator | testbed-node-2 : ok=14  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-11 06:32:53.575301 | orchestrator |
2026-04-11 06:32:53.575306 | orchestrator |
2026-04-11 06:32:53.575312 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 06:32:53.575319 | orchestrator | Saturday 11 April 2026 06:32:53 +0000 (0:00:11.473) 0:02:06.947 ********
2026-04-11 06:32:53.575325 | orchestrator | ===============================================================================
2026-04-11 06:32:53.575331 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.15s
2026-04-11 06:32:53.575338 | orchestrator | placement : Restart placement-api container ---------------------------- 12.66s
2026-04-11 06:32:53.575344 | orchestrator | placement : Perform Placement online data migration -------------------- 11.47s
2026-04-11 06:32:53.575350 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 7.52s
2026-04-11 06:32:53.575357 | orchestrator | service-ks-register : placement | Creating users ------------------------ 6.47s
2026-04-11 06:32:53.575367 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 5.01s
2026-04-11 06:32:53.575373 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 4.99s
2026-04-11 06:32:53.575380 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.24s
2026-04-11 06:32:53.575386 | orchestrator | placement : include_tasks ----------------------------------------------- 4.21s
2026-04-11 06:32:53.575393 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.18s
2026-04-11 06:32:53.575400 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.72s
2026-04-11 06:32:53.575407 | orchestrator | placement : Creating placement databases user and setting permissions --- 3.51s
2026-04-11 06:32:53.575414 | orchestrator | placement : Creating placement databases -------------------------------- 3.05s
2026-04-11 06:32:53.575421 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 2.91s
2026-04-11 06:32:53.575428 | orchestrator | placement : Copying over config.json files for services ----------------- 2.55s
2026-04-11 06:32:53.575435 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.46s
2026-04-11 06:32:53.575442 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.42s
2026-04-11 06:32:53.575448 | orchestrator | service-check-containers : placement | Check containers ----------------- 2.34s
2026-04-11 06:32:53.575455 | orchestrator | placement : Copying over existing policy file --------------------------- 2.24s
2026-04-11 06:32:53.575462 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.18s
2026-04-11 06:32:53.765310 | orchestrator | + osism apply -a upgrade neutron
2026-04-11 06:32:55.060930 | orchestrator | 2026-04-11 06:32:55 | INFO  | Prepare task for execution of neutron.
2026-04-11 06:32:55.127568 | orchestrator | 2026-04-11 06:32:55 | INFO  | Task 528c4bb1-d86b-450c-a8fa-58b230063103 (neutron) was prepared for execution.
2026-04-11 06:32:55.127667 | orchestrator | 2026-04-11 06:32:55 | INFO  | It takes a moment until task 528c4bb1-d86b-450c-a8fa-58b230063103 (neutron) has been started and output is visible here.
2026-04-11 06:33:35.277103 | orchestrator |
2026-04-11 06:33:35.277204 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 06:33:35.277222 | orchestrator |
2026-04-11 06:33:35.277236 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 06:33:35.277247 | orchestrator | Saturday 11 April 2026 06:33:00 +0000 (0:00:01.703) 0:00:01.703 ********
2026-04-11 06:33:35.277279 | orchestrator | ok: [testbed-node-0]
2026-04-11 06:33:35.277292 | orchestrator | ok: [testbed-node-1]
2026-04-11 06:33:35.277303 | orchestrator | ok: [testbed-node-2]
2026-04-11 06:33:35.277313 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:33:35.277324 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:33:35.277335 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:33:35.277346 | orchestrator |
2026-04-11 06:33:35.277357 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 06:33:35.277368 | orchestrator | Saturday 11 April 2026 06:33:02 +0000 (0:00:02.455) 0:00:04.159 ********
2026-04-11 06:33:35.277379 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-11 06:33:35.277390 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-11 06:33:35.277400 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-11 06:33:35.277411 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-11 06:33:35.277422 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-11 06:33:35.277433 |
orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-11 06:33:35.277444 | orchestrator |
2026-04-11 06:33:35.277455 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-11 06:33:35.277465 | orchestrator |
2026-04-11 06:33:35.277476 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-11 06:33:35.277487 | orchestrator | Saturday 11 April 2026 06:33:04 +0000 (0:00:02.093) 0:00:06.252 ********
2026-04-11 06:33:35.277499 | orchestrator | included: /ansible/roles/neutron/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 06:33:35.277510 | orchestrator |
2026-04-11 06:33:35.277521 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-11 06:33:35.277531 | orchestrator | Saturday 11 April 2026 06:33:08 +0000 (0:00:03.622) 0:00:09.874 ********
2026-04-11 06:33:35.277542 | orchestrator | ok: [testbed-node-0]
2026-04-11 06:33:35.277553 | orchestrator | ok: [testbed-node-1]
2026-04-11 06:33:35.277564 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:33:35.277575 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:33:35.277585 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:33:35.277596 | orchestrator | ok: [testbed-node-2]
2026-04-11 06:33:35.277607 | orchestrator |
2026-04-11 06:33:35.277621 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-11 06:33:35.277648 | orchestrator | Saturday 11 April 2026 06:33:11 +0000 (0:00:03.562) 0:00:13.437 ********
2026-04-11 06:33:35.277661 | orchestrator | ok: [testbed-node-0]
2026-04-11 06:33:35.277683 | orchestrator | ok: [testbed-node-1]
2026-04-11 06:33:35.277696 | orchestrator | ok: [testbed-node-2]
2026-04-11 06:33:35.277708 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:33:35.277720 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:33:35.277732 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:33:35.277744 | orchestrator |
2026-04-11 06:33:35.277756 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-11 06:33:35.277769 | orchestrator | Saturday 11 April 2026 06:33:14 +0000 (0:00:02.533) 0:00:15.970 ********
2026-04-11 06:33:35.277782 | orchestrator | ok: [testbed-node-0] => {
2026-04-11 06:33:35.277795 | orchestrator |  "changed": false,
2026-04-11 06:33:35.277808 | orchestrator |  "msg": "All assertions passed"
2026-04-11 06:33:35.277821 | orchestrator | }
2026-04-11 06:33:35.277834 | orchestrator | ok: [testbed-node-1] => {
2026-04-11 06:33:35.277846 | orchestrator |  "changed": false,
2026-04-11 06:33:35.277858 | orchestrator |  "msg": "All assertions passed"
2026-04-11 06:33:35.277870 | orchestrator | }
2026-04-11 06:33:35.277893 | orchestrator | ok: [testbed-node-2] => {
2026-04-11 06:33:35.277906 | orchestrator |  "changed": false,
2026-04-11 06:33:35.277919 | orchestrator |  "msg": "All assertions passed"
2026-04-11 06:33:35.277931 | orchestrator | }
2026-04-11 06:33:35.277944 | orchestrator | ok: [testbed-node-3] => {
2026-04-11 06:33:35.277964 | orchestrator |  "changed": false,
2026-04-11 06:33:35.277975 | orchestrator |  "msg": "All assertions passed"
2026-04-11 06:33:35.277986 | orchestrator | }
2026-04-11 06:33:35.278067 | orchestrator | ok: [testbed-node-4] => {
2026-04-11 06:33:35.278080 | orchestrator |  "changed": false,
2026-04-11 06:33:35.278091 | orchestrator |  "msg": "All assertions passed"
2026-04-11 06:33:35.278102 | orchestrator | }
2026-04-11 06:33:35.278113 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 06:33:35.278124 | orchestrator |  "changed": false,
2026-04-11 06:33:35.278135 | orchestrator |  "msg": "All assertions passed"
2026-04-11 06:33:35.278146 | orchestrator | }
2026-04-11 06:33:35.278157 | orchestrator |
2026-04-11 06:33:35.278168 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-11 06:33:35.278179 | orchestrator | Saturday 11 April 2026 06:33:16 +0000 (0:00:01.977) 0:00:17.948 ********
2026-04-11 06:33:35.278190 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:33:35.278201 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:33:35.278212 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:33:35.278223 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:33:35.278234 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:33:35.278245 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:33:35.278256 | orchestrator |
2026-04-11 06:33:35.278267 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-11 06:33:35.278278 | orchestrator | Saturday 11 April 2026 06:33:19 +0000 (0:00:02.769) 0:00:20.717 ********
2026-04-11 06:33:35.278289 | orchestrator | included: /ansible/roles/neutron/tasks/rolling_upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 06:33:35.278301 | orchestrator |
2026-04-11 06:33:35.278312 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-04-11 06:33:35.278323 | orchestrator | Saturday 11 April 2026 06:33:21 +0000 (0:00:02.786) 0:00:23.504 ********
2026-04-11 06:33:35.278334 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:33:35.278345 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:33:35.278356 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:33:35.278367 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:33:35.278394 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:33:35.278406 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:33:35.278417 | orchestrator |
2026-04-11 06:33:35.278428 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-04-11 06:33:35.278439 | orchestrator | Saturday 11 April 2026 06:33:26 +0000 (0:00:04.145) 0:00:27.649 ********
2026-04-11 06:33:35.278450 | orchestrator | ok: [testbed-node-0]
2026-04-11 06:33:35.278461 | orchestrator | ok: [testbed-node-1]
2026-04-11 06:33:35.278472 | orchestrator | ok: [testbed-node-2]
2026-04-11 06:33:35.278483 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:33:35.278494 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:33:35.278505 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:33:35.278515 | orchestrator |
2026-04-11 06:33:35.278526 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-11 06:33:35.278537 | orchestrator | Saturday 11 April 2026 06:33:28 +0000 (0:00:01.990) 0:00:29.640 ********
2026-04-11 06:33:35.278548 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:33:35.278559 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:33:35.278570 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:33:35.278581 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:33:35.278592 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:33:35.278603 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:33:35.278613 | orchestrator |
2026-04-11 06:33:35.278625 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-04-11 06:33:35.278636 | orchestrator | Saturday 11 April 2026 06:33:32 +0000 (0:00:04.811) 0:00:34.451 ********
2026-04-11 06:33:35.278652 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:33:35.278684 | orchestrator | ok: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 06:33:35.278699 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:33:35.278721 | orchestrator | ok: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 06:33:44.822127 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 06:33:44.822245 | orchestrator | ok: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 06:33:44.822260 | orchestrator |
2026-04-11 06:33:44.822268 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-04-11 06:33:44.822277 | orchestrator | Saturday 11 April 2026 06:33:36 +0000 (0:00:03.893) 0:00:38.345 ********
2026-04-11 06:33:44.822284 | orchestrator | [WARNING]: Skipped
2026-04-11 06:33:44.822305 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-04-11 06:33:44.822313 | orchestrator | due to this access issue:
2026-04-11 06:33:44.822321 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-04-11 06:33:44.822328 | orchestrator | a directory
2026-04-11 06:33:44.822335 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 06:33:44.822342 | orchestrator |
2026-04-11 06:33:44.822349 | orchestrator | TASK [neutron :
include_tasks] ************************************************* 2026-04-11 06:33:44.822356 | orchestrator | Saturday 11 April 2026 06:33:39 +0000 (0:00:02.228) 0:00:40.573 ******** 2026-04-11 06:33:44.822364 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 06:33:44.822373 | orchestrator | 2026-04-11 06:33:44.822379 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-11 06:33:44.822386 | orchestrator | Saturday 11 April 2026 06:33:41 +0000 (0:00:02.815) 0:00:43.389 ******** 2026-04-11 06:33:44.822394 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:33:44.822417 | orchestrator | ok: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 06:33:44.822430 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:33:44.822441 | orchestrator | ok: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 06:33:44.822448 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:33:44.822454 | orchestrator | ok: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 06:33:44.822465 | orchestrator | 2026-04-11 06:33:44.822475 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-11 06:33:51.538731 | orchestrator | Saturday 11 April 2026 06:33:45 +0000 (0:00:04.124) 0:00:47.514 ******** 2026-04-11 06:33:51.538850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:33:51.538874 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:33:51.538905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:33:51.538919 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:33:51.538930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:33:51.538942 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:33:51.538974 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:33:51.539089 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:33:51.539123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:33:51.539136 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:33:51.539148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:33:51.539159 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:33:51.539170 | orchestrator | 2026-04-11 06:33:51.539182 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-11 06:33:51.539193 | orchestrator | Saturday 11 April 2026 06:33:49 +0000 (0:00:03.640) 0:00:51.154 ******** 2026-04-11 06:33:51.539211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:33:51.539223 | 
orchestrator | skipping: [testbed-node-1] 2026-04-11 06:33:51.539235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:33:51.539255 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:33:51.539280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:34:01.824751 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:34:01.824899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:01.824920 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:01.824954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-04-11 06:34:01.824967 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:01.824978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:01.825042 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:01.825055 | orchestrator | 2026-04-11 06:34:01.825067 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-11 06:34:01.825080 | orchestrator | Saturday 11 April 2026 06:33:53 +0000 (0:00:03.621) 0:00:54.776 ******** 2026-04-11 06:34:01.825091 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:34:01.825102 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:01.825113 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:34:01.825123 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:34:01.825134 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:01.825145 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:01.825156 | orchestrator | 2026-04-11 06:34:01.825167 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-11 06:34:01.825178 | orchestrator | Saturday 11 April 2026 06:33:56 +0000 (0:00:03.306) 0:00:58.082 ******** 2026-04-11 06:34:01.825188 | 
orchestrator | skipping: [testbed-node-0] 2026-04-11 06:34:01.825199 | orchestrator | 2026-04-11 06:34:01.825210 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-11 06:34:01.825221 | orchestrator | Saturday 11 April 2026 06:33:57 +0000 (0:00:01.115) 0:00:59.198 ******** 2026-04-11 06:34:01.825232 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:34:01.825243 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:34:01.825256 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:34:01.825268 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:01.825282 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:01.825294 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:01.825307 | orchestrator | 2026-04-11 06:34:01.825320 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-11 06:34:01.825332 | orchestrator | Saturday 11 April 2026 06:33:59 +0000 (0:00:01.973) 0:01:01.172 ******** 2026-04-11 06:34:01.825371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:34:01.825387 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:34:01.825406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:34:01.825429 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:34:01.825442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:34:01.825457 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:34:01.825470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:01.825484 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:01.825505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:11.614817 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:11.614974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:11.615009 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:11.615055 | orchestrator | 2026-04-11 06:34:11.615105 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-11 06:34:11.615133 | orchestrator | Saturday 11 April 2026 06:34:02 +0000 (0:00:03.306) 0:01:04.479 ******** 2026-04-11 06:34:11.615149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:34:11.615173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:34:11.615196 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 06:34:11.615275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:34:11.615316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 06:34:11.615375 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 06:34:11.615391 | orchestrator | 2026-04-11 06:34:11.615405 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-11 06:34:11.615419 | orchestrator | Saturday 11 April 2026 06:34:06 +0000 (0:00:04.025) 0:01:08.505 ******** 2026-04-11 06:34:11.615432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:34:11.615459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:34:16.224213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 06:34:16.224347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:34:16.224366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 06:34:16.224380 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-11 06:34:16.224392 | orchestrator | 2026-04-11 06:34:16.224404 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-11 06:34:16.224417 | orchestrator | Saturday 11 April 2026 06:34:14 +0000 (0:00:07.027) 0:01:15.532 ******** 2026-04-11 06:34:16.224450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:34:16.224470 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:34:16.224489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:34:16.224502 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:34:16.224514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:16.224525 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:16.224537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:34:16.224549 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:34:16.224577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:36.181880 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:36.182155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:36.182200 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:36.182220 | orchestrator | 2026-04-11 06:34:36.182239 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-11 06:34:36.182259 | orchestrator | Saturday 11 April 2026 06:34:17 +0000 (0:00:03.322) 0:01:18.854 ******** 2026-04-11 06:34:36.182276 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:36.182296 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:36.182315 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:34:36.182335 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:36.182354 | orchestrator | ok: [testbed-node-2] 
2026-04-11 06:34:36.182373 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:34:36.182392 | orchestrator | 2026-04-11 06:34:36.182412 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-11 06:34:36.182432 | orchestrator | Saturday 11 April 2026 06:34:21 +0000 (0:00:03.927) 0:01:22.782 ******** 2026-04-11 06:34:36.182454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:36.182474 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:36.182496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:36.182515 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:36.182568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:36.182589 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:36.182650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:34:36.182678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:34:36.182702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:34:36.182723 | orchestrator | 2026-04-11 06:34:36.182742 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-11 06:34:36.182771 | orchestrator | Saturday 11 April 2026 06:34:25 +0000 (0:00:04.739) 0:01:27.522 ******** 2026-04-11 06:34:36.182790 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:34:36.182809 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:34:36.182828 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:34:36.182847 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:36.182867 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:36.182886 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:36.182905 | orchestrator | 2026-04-11 06:34:36.182924 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-11 06:34:36.182945 | orchestrator | Saturday 11 April 2026 06:34:29 +0000 (0:00:03.442) 0:01:30.964 ******** 2026-04-11 06:34:36.182962 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:34:36.182979 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:34:36.182997 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:34:36.183016 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:36.183033 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:36.183086 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:36.183105 | orchestrator | 2026-04-11 06:34:36.183123 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-11 06:34:36.183141 | orchestrator | Saturday 11 April 2026 06:34:32 +0000 (0:00:03.372) 0:01:34.336 ******** 2026-04-11 06:34:36.183157 | orchestrator | skipping: [testbed-node-0] 2026-04-11 
06:34:36.183175 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:34:36.183193 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:34:36.183211 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:36.183230 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:36.183249 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:36.183267 | orchestrator | 2026-04-11 06:34:36.183287 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-11 06:34:36.183324 | orchestrator | Saturday 11 April 2026 06:34:36 +0000 (0:00:03.363) 0:01:37.700 ******** 2026-04-11 06:34:51.451393 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:34:51.451514 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:34:51.451531 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:34:51.451543 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:51.451554 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:51.451569 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:51.451588 | orchestrator | 2026-04-11 06:34:51.451609 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-11 06:34:51.451642 | orchestrator | Saturday 11 April 2026 06:34:39 +0000 (0:00:03.435) 0:01:41.135 ******** 2026-04-11 06:34:51.451654 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:34:51.451665 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:34:51.451676 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:34:51.451687 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:51.451697 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:51.451708 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:51.451719 | orchestrator | 2026-04-11 06:34:51.451730 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-11 06:34:51.451741 | orchestrator | Saturday 11 April 
2026 06:34:43 +0000 (0:00:03.439) 0:01:44.574 ******** 2026-04-11 06:34:51.451751 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-11 06:34:51.451763 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:34:51.451774 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-11 06:34:51.451785 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:34:51.451795 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-11 06:34:51.451806 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:34:51.451817 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-11 06:34:51.451847 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:51.451859 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-11 06:34:51.451870 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:51.451880 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-11 06:34:51.451891 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:51.451902 | orchestrator | 2026-04-11 06:34:51.451913 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-11 06:34:51.451924 | orchestrator | Saturday 11 April 2026 06:34:46 +0000 (0:00:03.577) 0:01:48.152 ******** 2026-04-11 06:34:51.451942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:34:51.451958 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:34:51.451970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:34:51.451981 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:34:51.452018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:51.452032 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:34:51.452044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:34:51.452104 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:34:51.452125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:51.452137 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:34:51.452148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:34:51.452159 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:34:51.452170 | orchestrator | 2026-04-11 06:34:51.452181 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-11 06:34:51.452192 | orchestrator | Saturday 11 April 2026 06:34:50 +0000 (0:00:03.458) 0:01:51.611 ******** 2026-04-11 06:34:51.452219 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:35:25.891166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:35:25.891305 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:35:25.891325 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:35:25.891339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:35:25.891352 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:35:25.891380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:35:25.891393 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:35:25.891405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:35:25.891416 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:35:25.891462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-11 06:35:25.891485 | orchestrator | skipping: [testbed-node-5] 
2026-04-11 06:35:25.891496 | orchestrator | 2026-04-11 06:35:25.891508 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-11 06:35:25.891520 | orchestrator | Saturday 11 April 2026 06:34:53 +0000 (0:00:03.572) 0:01:55.183 ******** 2026-04-11 06:35:25.891532 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:35:25.891542 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:35:25.891553 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:35:25.891564 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:35:25.891575 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:35:25.891586 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:35:25.891597 | orchestrator | 2026-04-11 06:35:25.891608 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-11 06:35:25.891619 | orchestrator | Saturday 11 April 2026 06:34:57 +0000 (0:00:03.493) 0:01:58.677 ******** 2026-04-11 06:35:25.891630 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:35:25.891641 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:35:25.891651 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:35:25.891662 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:35:25.891673 | orchestrator | changed: [testbed-node-5] 2026-04-11 06:35:25.891684 | orchestrator | changed: [testbed-node-4] 2026-04-11 06:35:25.891695 | orchestrator | 2026-04-11 06:35:25.891706 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-11 06:35:25.891717 | orchestrator | Saturday 11 April 2026 06:35:02 +0000 (0:00:05.325) 0:02:04.002 ******** 2026-04-11 06:35:25.891728 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:35:25.891739 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:35:25.891751 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:35:25.891762 | orchestrator | skipping: [testbed-node-4] 
2026-04-11 06:35:25.891772 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:35:25.891783 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:35:25.891794 | orchestrator | 2026-04-11 06:35:25.891805 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-11 06:35:25.891816 | orchestrator | Saturday 11 April 2026 06:35:05 +0000 (0:00:03.163) 0:02:07.166 ******** 2026-04-11 06:35:25.891827 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:35:25.891838 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:35:25.891848 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:35:25.891859 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:35:25.891870 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:35:25.891881 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:35:25.891891 | orchestrator | 2026-04-11 06:35:25.891902 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-11 06:35:25.891913 | orchestrator | Saturday 11 April 2026 06:35:09 +0000 (0:00:03.602) 0:02:10.768 ******** 2026-04-11 06:35:25.891924 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:35:25.891935 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:35:25.891946 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:35:25.891957 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:35:25.891968 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:35:25.891978 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:35:25.891989 | orchestrator | 2026-04-11 06:35:25.892000 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-11 06:35:25.892018 | orchestrator | Saturday 11 April 2026 06:35:12 +0000 (0:00:03.436) 0:02:14.204 ******** 2026-04-11 06:35:25.892029 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:35:25.892040 | orchestrator | skipping: [testbed-node-2] 
2026-04-11 06:35:25.892051 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:35:25.892061 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:35:25.892072 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:35:25.892104 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:35:25.892116 | orchestrator |
2026-04-11 06:35:25.892127 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-04-11 06:35:25.892138 | orchestrator | Saturday 11 April 2026 06:35:16 +0000 (0:00:03.750) 0:02:17.955 ********
2026-04-11 06:35:25.892149 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:35:25.892160 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:35:25.892170 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:35:25.892181 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:35:25.892192 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:35:25.892203 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:35:25.892214 | orchestrator |
2026-04-11 06:35:25.892225 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-04-11 06:35:25.892236 | orchestrator | Saturday 11 April 2026 06:35:20 +0000 (0:00:03.596) 0:02:21.551 ********
2026-04-11 06:35:25.892247 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:35:25.892258 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:35:25.892269 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:35:25.892279 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:35:25.892290 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:35:25.892301 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:35:25.892312 | orchestrator |
2026-04-11 06:35:25.892323 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-04-11 06:35:25.892334 | orchestrator | Saturday 11 April 2026 06:35:23 +0000 (0:00:03.504) 0:02:25.056 ********
2026-04-11 06:35:25.892345 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:35:25.892356 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:35:25.892367 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:35:25.892378 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:35:25.892389 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:35:25.892400 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:35:25.892411 | orchestrator |
2026-04-11 06:35:25.892436 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-04-11 06:35:36.489850 | orchestrator | Saturday 11 April 2026 06:35:26 +0000 (0:00:03.385) 0:02:28.442 ********
2026-04-11 06:35:36.489963 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-11 06:35:36.489982 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:35:36.489995 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-11 06:35:36.490006 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:35:36.490083 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-11 06:35:36.490163 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-11 06:35:36.490175 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:35:36.490186 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:35:36.490197 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-11 06:35:36.490208 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:35:36.490219 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-11 06:35:36.490230 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:35:36.490241 | orchestrator |
2026-04-11 06:35:36.490253 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-04-11 06:35:36.490289 | orchestrator | Saturday 11 April 2026 06:35:30 +0000 (0:00:03.463) 0:02:31.905 ********
2026-04-11 06:35:36.490307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 06:35:36.490324 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:35:36.490336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 06:35:36.490348 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:35:36.490395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 06:35:36.490411 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:35:36.490425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 06:35:36.490449 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:35:36.490462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 06:35:36.490475 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:35:36.490488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 06:35:36.490501 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:35:36.490514 | orchestrator |
2026-04-11 06:35:36.490527 | orchestrator | TASK [service-check-containers : neutron | Check containers] *******************
2026-04-11 06:35:36.490540 | orchestrator | Saturday 11 April 2026 06:35:33 +0000 (0:00:03.600) 0:02:35.506 ********
2026-04-11 06:35:36.490554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 06:35:36.490584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 06:35:41.807769 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 06:35:41.807879 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 06:35:41.807906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 06:35:41.807947 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 06:35:41.807970 | orchestrator |
2026-04-11 06:35:41.807991 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] ***
2026-04-11 06:35:41.808011 | orchestrator | Saturday 11 April 2026 06:35:37 +0000 (0:00:03.983) 0:02:39.489 ********
2026-04-11 06:35:41.808032 | orchestrator | changed: [testbed-node-0] => {
2026-04-11 06:35:41.808045 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 06:35:41.808078 | orchestrator | }
2026-04-11 06:35:41.808181 | orchestrator | changed: [testbed-node-1] => {
2026-04-11 06:35:41.808197 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 06:35:41.808208 | orchestrator | }
2026-04-11 06:35:41.808219 | orchestrator | changed: [testbed-node-2] => {
2026-04-11 06:35:41.808230 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 06:35:41.808240 | orchestrator | }
2026-04-11 06:35:41.808251 | orchestrator | changed: [testbed-node-3] => {
2026-04-11 06:35:41.808261 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 06:35:41.808272 | orchestrator | }
2026-04-11 06:35:41.808283 | orchestrator | changed: [testbed-node-4] => {
2026-04-11 06:35:41.808294 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 06:35:41.808305 | orchestrator | }
2026-04-11 06:35:41.808317 | orchestrator | changed: [testbed-node-5] => {
2026-04-11 06:35:41.808347 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 06:35:41.808358 | orchestrator | }
2026-04-11 06:35:41.808369 | orchestrator |
2026-04-11 06:35:41.808381 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-11 06:35:41.808392 | orchestrator | Saturday 11 April 2026 06:35:40 +0000 (0:00:02.057) 0:02:41.546 ********
2026-04-11 06:35:41.808404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 06:35:41.808417 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:35:41.808430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 06:35:41.808441 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:35:41.808453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 06:35:41.808473 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:35:41.808500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 06:38:42.071485 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:38:42.071573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 06:38:42.071583 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:38:42.071588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-11 06:38:42.071594 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:38:42.071599 | orchestrator |
2026-04-11 06:38:42.071604 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-11 06:38:42.071610 | orchestrator | Saturday 11 April 2026 06:35:43 +0000 (0:00:03.957) 0:02:45.504 ********
2026-04-11 06:38:42.071615 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:38:42.071620 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:38:42.071625 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:38:42.071630 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:38:42.071635 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:38:42.071640 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:38:42.071644 | orchestrator |
2026-04-11 06:38:42.071650 | orchestrator | TASK [neutron : Running Neutron database expand container] *********************
2026-04-11 06:38:42.071654 | orchestrator | Saturday 11 April 2026 06:35:45 +0000 (0:00:01.898) 0:02:47.403 ********
2026-04-11 06:38:42.071659 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:38:42.071680 | orchestrator |
2026-04-11 06:38:42.071685 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 06:38:42.071690 | orchestrator | Saturday 11 April 2026 06:36:19 +0000 (0:00:33.158) 0:03:20.561 ********
2026-04-11 06:38:42.071695 | orchestrator |
2026-04-11 06:38:42.071700 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 06:38:42.071705 | orchestrator | Saturday 11 April 2026 06:36:19 +0000 (0:00:00.438) 0:03:21.000 ********
2026-04-11 06:38:42.071710 | orchestrator |
2026-04-11 06:38:42.071714 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 06:38:42.071719 | orchestrator | Saturday 11 April 2026 06:36:19 +0000 (0:00:00.447) 0:03:21.447 ********
2026-04-11 06:38:42.071724 | orchestrator |
2026-04-11 06:38:42.071729 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 06:38:42.071734 | orchestrator | Saturday 11 April 2026 06:36:20 +0000 (0:00:00.632) 0:03:22.080 ********
2026-04-11 06:38:42.071738 | orchestrator |
2026-04-11 06:38:42.071743 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 06:38:42.071748 | orchestrator | Saturday 11 April 2026 06:36:20 +0000 (0:00:00.418) 0:03:22.499 ********
2026-04-11 06:38:42.071753 | orchestrator |
2026-04-11 06:38:42.071758 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 06:38:42.071771 | orchestrator | Saturday 11 April 2026 06:36:21 +0000 (0:00:00.447) 0:03:22.947 ********
2026-04-11 06:38:42.071776 | orchestrator |
2026-04-11 06:38:42.071781 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-11 06:38:42.071786 | orchestrator | Saturday 11 April 2026 06:36:22 +0000 (0:00:00.797) 0:03:23.744 ********
2026-04-11 06:38:42.071791 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:38:42.071796 | orchestrator | changed: [testbed-node-2]
2026-04-11 06:38:42.071801 | orchestrator | changed: [testbed-node-1]
2026-04-11 06:38:42.071806 | orchestrator |
2026-04-11 06:38:42.071810 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-11 06:38:42.071815 | orchestrator | Saturday 11 April 2026 06:37:09 +0000 (0:00:47.297) 0:04:11.041 ********
2026-04-11 06:38:42.071820 | orchestrator | changed: [testbed-node-4]
2026-04-11 06:38:42.071825 | orchestrator | changed: [testbed-node-5]
2026-04-11 06:38:42.071829 | orchestrator | changed: [testbed-node-3]
2026-04-11 06:38:42.071834 | orchestrator |
2026-04-11 06:38:42.071839 | orchestrator | TASK [neutron : Checking neutron pending contract scripts] *********************
2026-04-11 06:38:42.071844 | orchestrator | Saturday 11 April 2026 06:38:14 +0000 (0:01:05.375) 0:05:16.417 ********
2026-04-11 06:38:42.071849 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:38:42.071853 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:38:42.071858 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:38:42.071873 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:38:42.071878 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:38:42.071883 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:38:42.071888 | orchestrator |
2026-04-11 06:38:42.071893 | orchestrator | TASK [neutron : Stopping all neutron-server for contract db] *******************
2026-04-11 06:38:42.071898 | orchestrator | Saturday 11 April 2026 06:38:16 +0000 (0:00:01.944) 0:05:18.362 ********
2026-04-11 06:38:42.071903 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:38:42.071907 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:38:42.071912 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:38:42.071917 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:38:42.071922 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:38:42.071926 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:38:42.071931 | orchestrator |
2026-04-11 06:38:42.071936 | orchestrator | TASK [neutron : Running Neutron database contract container] *******************
2026-04-11 06:38:42.071941 | orchestrator | Saturday 11 April 2026 06:38:21 +0000 (0:00:04.864) 0:05:23.226 ********
2026-04-11 06:38:42.071946 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:38:42.071950 | orchestrator |
2026-04-11 06:38:42.071955 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 06:38:42.071965 | orchestrator | Saturday 11 April 2026 06:38:36 +0000 (0:00:14.382) 0:05:37.608 ********
2026-04-11 06:38:42.071969 | orchestrator |
2026-04-11 06:38:42.071974 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 06:38:42.071979 | orchestrator | Saturday 11 April 2026 06:38:36 +0000 (0:00:00.441) 0:05:38.050 ********
2026-04-11 06:38:42.071984 | orchestrator |
2026-04-11 06:38:42.071989 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 06:38:42.071993 | orchestrator | Saturday 11 April 2026 06:38:36 +0000 (0:00:00.468) 0:05:38.518 ********
2026-04-11 06:38:42.071998 | orchestrator |
2026-04-11 06:38:42.072003 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 06:38:42.072008 | orchestrator | Saturday 11 April 2026 06:38:37 +0000 (0:00:00.469) 0:05:38.988 ********
2026-04-11 06:38:42.072013 | orchestrator |
2026-04-11 06:38:42.072017 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 06:38:42.072022 | orchestrator | Saturday 11 April 2026 06:38:37 +0000 (0:00:00.445) 0:05:39.433 ********
2026-04-11 06:38:42.072027 | orchestrator |
2026-04-11 06:38:42.072032 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-11 06:38:42.072036 | orchestrator | Saturday 11 April 2026 06:38:38 +0000 (0:00:00.453) 0:05:39.888 ********
2026-04-11 06:38:42.072041 | orchestrator |
2026-04-11 06:38:42.072046 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-11 06:38:42.072051 | orchestrator | Saturday 11 April 2026 06:38:39 +0000 (0:00:00.807) 0:05:40.695 ********
2026-04-11 06:38:42.072055 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:38:42.072060 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:38:42.072065 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:38:42.072070 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:38:42.072075 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:38:42.072079 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:38:42.072084 | orchestrator |
2026-04-11 06:38:42.072089 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 06:38:42.072095 | orchestrator | testbed-node-0 : ok=21  changed=8  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-11 06:38:42.072101 | orchestrator | testbed-node-1 : ok=18  changed=6  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2026-04-11 06:38:42.072106 | orchestrator | testbed-node-2 : ok=18  changed=6  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2026-04-11 06:38:42.072111 | orchestrator | testbed-node-3 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-11 06:38:42.072116 | orchestrator | testbed-node-4 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-11 06:38:42.072120 | orchestrator | testbed-node-5 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-11 06:38:42.072125 | orchestrator |
2026-04-11 06:38:42.072130 | orchestrator |
2026-04-11 06:38:42.072135 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 06:38:42.072143 | orchestrator | Saturday 11 April 2026 06:38:42 +0000 (0:00:02.879) 0:05:43.575 ********
2026-04-11 06:38:42.072148 | orchestrator | ===============================================================================
2026-04-11 06:38:42.072153 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 65.38s
2026-04-11 06:38:42.072157 | orchestrator | neutron : Restart neutron-server container ----------------------------- 47.30s
2026-04-11 06:38:42.072162 | orchestrator | neutron : Running Neutron database expand container -------------------- 33.16s
2026-04-11 06:38:42.072171 | orchestrator | neutron : Running Neutron database contract container ------------------ 14.38s
2026-04-11 06:38:42.072176 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.03s
2026-04-11 06:38:42.072181 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.33s
2026-04-11 06:38:42.072186 | orchestrator | neutron : Stopping all neutron-server for contract db ------------------- 4.86s
2026-04-11 06:38:42.072190 | orchestrator | Setting sysctl values --------------------------------------------------- 4.81s
2026-04-11 06:38:42.072195 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.74s
2026-04-11 06:38:42.072203 | orchestrator | Load and persist kernel modules ----------------------------------------- 4.15s
2026-04-11 06:38:42.459768 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.12s
2026-04-11 06:38:42.459863 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.03s
2026-04-11 06:38:42.459875 | orchestrator | service-check-containers : neutron | Check containers ------------------- 3.98s
2026-04-11 06:38:42.459884 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.96s
2026-04-11 06:38:42.459893 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.93s
2026-04-11 06:38:42.459902 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.89s
2026-04-11 06:38:42.459911 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.75s
2026-04-11 06:38:42.459920 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.64s
2026-04-11 06:38:42.459929 | orchestrator | neutron : include_tasks ------------------------------------------------- 3.62s
2026-04-11 06:38:42.459938 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.62s
2026-04-11 06:38:42.659971 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-11 06:38:42.660055 | orchestrator | + osism apply -a reconfigure nova
2026-04-11 06:38:43.978788 | orchestrator | 2026-04-11 06:38:43 | INFO  | Prepare task for execution of nova.
2026-04-11 06:38:44.042693 | orchestrator | 2026-04-11 06:38:44 | INFO  | Task 4a6fdcaa-2e07-4370-9c43-ec712235ed55 (nova) was prepared for execution.
2026-04-11 06:38:44.042795 | orchestrator | 2026-04-11 06:38:44 | INFO  | It takes a moment until task 4a6fdcaa-2e07-4370-9c43-ec712235ed55 (nova) has been started and output is visible here.
2026-04-11 06:41:04.776652 | orchestrator |
2026-04-11 06:41:04.776752 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 06:41:04.776765 | orchestrator |
2026-04-11 06:41:04.776774 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-11 06:41:04.776783 | orchestrator | Saturday 11 April 2026 06:38:49 +0000 (0:00:01.766) 0:00:01.766 ********
2026-04-11 06:41:04.776791 | orchestrator | changed: [testbed-manager]
2026-04-11 06:41:04.776813 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:41:04.776821 | orchestrator | changed: [testbed-node-1]
2026-04-11 06:41:04.776829 | orchestrator | changed: [testbed-node-2]
2026-04-11 06:41:04.776837 | orchestrator | changed: [testbed-node-3]
2026-04-11 06:41:04.776845 | orchestrator | changed: [testbed-node-4]
2026-04-11 06:41:04.776853 | orchestrator | changed: [testbed-node-5]
2026-04-11 06:41:04.776861 | orchestrator |
2026-04-11 06:41:04.776869 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 06:41:04.776877 | orchestrator | Saturday 11 April 2026 06:38:52 +0000 (0:00:03.662) 0:00:05.428 ********
2026-04-11 06:41:04.776885 | orchestrator | changed: [testbed-manager]
2026-04-11 06:41:04.776893 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:41:04.776901 | orchestrator | changed: [testbed-node-1]
2026-04-11 06:41:04.776909 | orchestrator | changed: [testbed-node-2]
2026-04-11 06:41:04.776916 | orchestrator | changed: [testbed-node-3]
2026-04-11 06:41:04.776925 | orchestrator | changed: [testbed-node-4]
2026-04-11 06:41:04.776933 | orchestrator | changed: [testbed-node-5]
2026-04-11 06:41:04.776941 | orchestrator |
2026-04-11 06:41:04.776949 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 06:41:04.776976 | orchestrator | Saturday 11 April 2026 06:38:55 +0000 (0:00:02.098) 0:00:07.526 ********
2026-04-11 06:41:04.776985 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-11 06:41:04.776994 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-11 06:41:04.777001 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-11 06:41:04.777009 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-11 06:41:04.777017 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-11 06:41:04.777024 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-11 06:41:04.777032 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-11 06:41:04.777040 | orchestrator |
2026-04-11 06:41:04.777048 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-11 06:41:04.777056 | orchestrator |
2026-04-11 06:41:04.777063 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-11 06:41:04.777071 | orchestrator | Saturday 11 April 2026 06:38:58 +0000 (0:00:03.149) 0:00:10.676 ********
2026-04-11 06:41:04.777079 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 06:41:04.777087 | orchestrator |
2026-04-11 06:41:04.777107 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-04-11 06:41:04.777115 | orchestrator | Saturday 11 April 2026 06:39:01 +0000 (0:00:03.091) 0:00:13.768 ********
2026-04-11 06:41:04.777124 | orchestrator | ok: [testbed-node-0] => (item=nova_cell0)
2026-04-11 06:41:04.777132 | orchestrator | ok: [testbed-node-0] => (item=nova_api)
2026-04-11 06:41:04.777140 | orchestrator |
2026-04-11 06:41:04.777148 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-04-11 06:41:04.777156 | orchestrator | Saturday 11 April 2026 06:39:06 +0000 (0:00:05.193) 0:00:18.961 ********
2026-04-11 06:41:04.777164 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-11 06:41:04.777171 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-11 06:41:04.777179 | orchestrator | ok: [testbed-node-0]
2026-04-11 06:41:04.777187 | orchestrator |
2026-04-11 06:41:04.777195 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-11 06:41:04.777203 | orchestrator | Saturday 11 April 2026 06:39:11 +0000 (0:00:01.664) 0:00:24.357 ********
2026-04-11 06:41:04.777211 | orchestrator | ok: [testbed-node-0]
2026-04-11 06:41:04.777219 | orchestrator |
2026-04-11 06:41:04.777227 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-11 06:41:04.777234 | orchestrator | Saturday 11 April 2026 06:39:13 +0000 (0:00:02.072) 0:00:26.022 ********
2026-04-11 06:41:04.777242 | orchestrator | ok: [testbed-node-0]
2026-04-11 06:41:04.777250 | orchestrator |
2026-04-11 06:41:04.777258 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-11 06:41:04.777265 | orchestrator | Saturday 11 April 2026 06:39:15 +0000 (0:00:02.072) 0:00:28.094 ********
2026-04-11 06:41:04.777273 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:41:04.777281 | orchestrator |
2026-04-11 06:41:04.777289 | orchestrator | TASK [nova : 
include_tasks] **************************************************** 2026-04-11 06:41:04.777297 | orchestrator | Saturday 11 April 2026 06:39:19 +0000 (0:00:03.833) 0:00:31.927 ******** 2026-04-11 06:41:04.777304 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:41:04.777312 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:41:04.777338 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:41:04.777346 | orchestrator | 2026-04-11 06:41:04.777354 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-11 06:41:04.777362 | orchestrator | Saturday 11 April 2026 06:39:21 +0000 (0:00:01.694) 0:00:33.622 ******** 2026-04-11 06:41:04.777370 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:41:04.777377 | orchestrator | 2026-04-11 06:41:04.777385 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-11 06:41:04.777393 | orchestrator | Saturday 11 April 2026 06:39:55 +0000 (0:00:34.013) 0:01:07.635 ******** 2026-04-11 06:41:04.777407 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:41:04.777415 | orchestrator | 2026-04-11 06:41:04.777423 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-11 06:41:04.777431 | orchestrator | Saturday 11 April 2026 06:40:10 +0000 (0:00:15.661) 0:01:23.297 ******** 2026-04-11 06:41:04.777439 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:41:04.777447 | orchestrator | 2026-04-11 06:41:04.777454 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-11 06:41:04.777462 | orchestrator | Saturday 11 April 2026 06:40:25 +0000 (0:00:14.809) 0:01:38.106 ******** 2026-04-11 06:41:04.777470 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:41:04.777478 | orchestrator | 2026-04-11 06:41:04.777499 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-04-11 06:41:04.777507 | 
orchestrator | Saturday 11 April 2026 06:40:27 +0000 (0:00:02.010) 0:01:40.117 ******** 2026-04-11 06:41:04.777515 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:41:04.777523 | orchestrator | 2026-04-11 06:41:04.777531 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-11 06:41:04.777539 | orchestrator | Saturday 11 April 2026 06:40:29 +0000 (0:00:01.646) 0:01:41.764 ******** 2026-04-11 06:41:04.777547 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:41:04.777555 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:41:04.777563 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:41:04.777571 | orchestrator | 2026-04-11 06:41:04.777579 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-11 06:41:04.777586 | orchestrator | Saturday 11 April 2026 06:40:30 +0000 (0:00:01.526) 0:01:43.290 ******** 2026-04-11 06:41:04.777594 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:41:04.777602 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:41:04.777610 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:41:04.777618 | orchestrator | 2026-04-11 06:41:04.777626 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-11 06:41:04.777633 | orchestrator | 2026-04-11 06:41:04.777642 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-11 06:41:04.777649 | orchestrator | Saturday 11 April 2026 06:40:32 +0000 (0:00:01.698) 0:01:44.989 ******** 2026-04-11 06:41:04.777657 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:41:04.777665 | orchestrator | 2026-04-11 06:41:04.777673 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-04-11 06:41:04.777681 | orchestrator | Saturday 11 April 2026 06:40:34 +0000 (0:00:01.726) 
0:01:46.715 ******** 2026-04-11 06:41:04.777689 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:41:04.777697 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:41:04.777705 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:41:04.777713 | orchestrator | 2026-04-11 06:41:04.777720 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-04-11 06:41:04.777728 | orchestrator | Saturday 11 April 2026 06:40:37 +0000 (0:00:02.993) 0:01:49.709 ******** 2026-04-11 06:41:04.777736 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:41:04.777744 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:41:04.777752 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:41:04.777760 | orchestrator | 2026-04-11 06:41:04.777767 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-11 06:41:04.777775 | orchestrator | Saturday 11 April 2026 06:40:40 +0000 (0:00:03.586) 0:01:53.295 ******** 2026-04-11 06:41:04.777783 | orchestrator | skipping: [testbed-node-1] => (item=openstack)  2026-04-11 06:41:04.777791 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:41:04.777803 | orchestrator | skipping: [testbed-node-2] => (item=openstack)  2026-04-11 06:41:04.777811 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:41:04.777819 | orchestrator | ok: [testbed-node-0] => (item=openstack) 2026-04-11 06:41:04.777827 | orchestrator | 2026-04-11 06:41:04.777835 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-11 06:41:04.777849 | orchestrator | Saturday 11 April 2026 06:40:45 +0000 (0:00:04.820) 0:01:58.115 ******** 2026-04-11 06:41:04.777857 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-11 06:41:04.777865 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:41:04.777872 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-11 06:41:04.777880 | orchestrator | 
skipping: [testbed-node-2] 2026-04-11 06:41:04.777888 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-11 06:41:04.777896 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-04-11 06:41:04.777904 | orchestrator | 2026-04-11 06:41:04.777912 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-11 06:41:04.777919 | orchestrator | Saturday 11 April 2026 06:40:57 +0000 (0:00:11.854) 0:02:09.970 ******** 2026-04-11 06:41:04.777927 | orchestrator | skipping: [testbed-node-0] => (item=openstack)  2026-04-11 06:41:04.777935 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:41:04.777943 | orchestrator | skipping: [testbed-node-1] => (item=openstack)  2026-04-11 06:41:04.777950 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:41:04.777958 | orchestrator | skipping: [testbed-node-2] => (item=openstack)  2026-04-11 06:41:04.777966 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:41:04.777973 | orchestrator | 2026-04-11 06:41:04.777981 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-11 06:41:04.777989 | orchestrator | Saturday 11 April 2026 06:40:59 +0000 (0:00:01.561) 0:02:11.531 ******** 2026-04-11 06:41:04.777997 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-11 06:41:04.778005 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:41:04.778053 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-11 06:41:04.778062 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:41:04.778069 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-11 06:41:04.778077 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:41:04.778085 | orchestrator | 2026-04-11 06:41:04.778093 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-11 06:41:04.778101 | orchestrator | Saturday 11 April 2026 06:41:01 +0000 
(0:00:02.023) 0:02:13.555 ******** 2026-04-11 06:41:04.778109 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:41:04.778116 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:41:04.778124 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:41:04.778132 | orchestrator | 2026-04-11 06:41:04.778140 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-04-11 06:41:04.778148 | orchestrator | Saturday 11 April 2026 06:41:02 +0000 (0:00:01.530) 0:02:15.085 ******** 2026-04-11 06:41:04.778164 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:41:04.778172 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:41:04.778180 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:41:04.778187 | orchestrator | 2026-04-11 06:41:04.778195 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-04-11 06:41:04.778203 | orchestrator | Saturday 11 April 2026 06:41:04 +0000 (0:00:01.957) 0:02:17.043 ******** 2026-04-11 06:41:04.778216 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:42:31.621362 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:42:31.621530 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:42:31.621548 | orchestrator | 2026-04-11 06:42:31.621561 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-11 06:42:31.621573 | orchestrator | Saturday 11 April 2026 06:41:08 +0000 (0:00:03.635) 0:02:20.678 ******** 2026-04-11 06:42:31.621584 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:42:31.621596 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:42:31.621607 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:42:31.621619 | orchestrator | 2026-04-11 06:42:31.621629 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-11 06:42:31.621640 | orchestrator | Saturday 11 April 2026 06:41:21 +0000 (0:00:13.061) 
0:02:33.740 ******** 2026-04-11 06:42:31.621674 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:42:31.621686 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:42:31.621697 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:42:31.621707 | orchestrator | 2026-04-11 06:42:31.621718 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-11 06:42:31.621729 | orchestrator | Saturday 11 April 2026 06:41:34 +0000 (0:00:12.972) 0:02:46.712 ******** 2026-04-11 06:42:31.621739 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:42:31.621750 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:42:31.621761 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:42:31.621773 | orchestrator | 2026-04-11 06:42:31.621783 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-11 06:42:31.621794 | orchestrator | Saturday 11 April 2026 06:41:36 +0000 (0:00:02.255) 0:02:48.968 ******** 2026-04-11 06:42:31.621805 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:42:31.621816 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:42:31.621826 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:42:31.621837 | orchestrator | 2026-04-11 06:42:31.621848 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-11 06:42:31.621858 | orchestrator | Saturday 11 April 2026 06:41:38 +0000 (0:00:01.979) 0:02:50.948 ******** 2026-04-11 06:42:31.621869 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:42:31.621879 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:42:31.621890 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:42:31.621903 | orchestrator | 2026-04-11 06:42:31.621915 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-11 06:42:31.621927 | orchestrator | Saturday 11 April 2026 06:41:52 +0000 (0:00:13.666) 0:03:04.615 
******** 2026-04-11 06:42:31.621939 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:42:31.621952 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:42:31.621965 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:42:31.621977 | orchestrator | 2026-04-11 06:42:31.621990 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-11 06:42:31.622002 | orchestrator | 2026-04-11 06:42:31.622132 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-11 06:42:31.622148 | orchestrator | Saturday 11 April 2026 06:41:53 +0000 (0:00:01.548) 0:03:06.163 ******** 2026-04-11 06:42:31.622161 | orchestrator | included: /ansible/roles/nova/tasks/reconfigure.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:42:31.622174 | orchestrator | 2026-04-11 06:42:31.622186 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] ***************** 2026-04-11 06:42:31.622199 | orchestrator | Saturday 11 April 2026 06:41:55 +0000 (0:00:01.943) 0:03:08.107 ******** 2026-04-11 06:42:31.622211 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-11 06:42:31.622224 | orchestrator | ok: [testbed-node-0] => (item=nova (compute)) 2026-04-11 06:42:31.622236 | orchestrator | 2026-04-11 06:42:31.622248 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] **************** 2026-04-11 06:42:31.622259 | orchestrator | Saturday 11 April 2026 06:41:59 +0000 (0:00:04.196) 0:03:12.303 ******** 2026-04-11 06:42:31.622270 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-11 06:42:31.622282 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-11 06:42:31.622293 | orchestrator | ok: [testbed-node-0] => (item=nova 
-> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-11 06:42:31.622304 | orchestrator | ok: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-11 06:42:31.622315 | orchestrator | 2026-04-11 06:42:31.622325 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-11 06:42:31.622336 | orchestrator | Saturday 11 April 2026 06:42:07 +0000 (0:00:07.472) 0:03:19.775 ******** 2026-04-11 06:42:31.622347 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-11 06:42:31.622366 | orchestrator | 2026-04-11 06:42:31.622404 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-04-11 06:42:31.622416 | orchestrator | Saturday 11 April 2026 06:42:11 +0000 (0:00:04.184) 0:03:23.960 ******** 2026-04-11 06:42:31.622426 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-11 06:42:31.622437 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-11 06:42:31.622448 | orchestrator | 2026-04-11 06:42:31.622458 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-11 06:42:31.622469 | orchestrator | Saturday 11 April 2026 06:42:17 +0000 (0:00:05.823) 0:03:29.783 ******** 2026-04-11 06:42:31.622480 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-11 06:42:31.622490 | orchestrator | 2026-04-11 06:42:31.622501 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] *************** 2026-04-11 06:42:31.622511 | orchestrator | Saturday 11 April 2026 06:42:21 +0000 (0:00:04.210) 0:03:33.994 ******** 2026-04-11 06:42:31.622522 | orchestrator | ok: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-11 06:42:31.622534 | orchestrator | ok: [testbed-node-0] => (item=nova -> service -> service) 2026-04-11 06:42:31.622544 | orchestrator | 2026-04-11 06:42:31.622573 | orchestrator | TASK [nova 
: Ensuring config directories exist] ******************************** 2026-04-11 06:42:31.622584 | orchestrator | Saturday 11 April 2026 06:42:29 +0000 (0:00:08.365) 0:03:42.360 ******** 2026-04-11 06:42:31.622601 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:31.622624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': 
{'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:31.622638 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:31.622668 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:42.955007 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:42:42.955139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:42.955158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:42.955196 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:42:42.955209 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:42:42.955222 | orchestrator | 2026-04-11 06:42:42.955235 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-11 06:42:42.955248 | orchestrator | Saturday 11 April 2026 06:42:33 +0000 (0:00:03.559) 0:03:45.919 ******** 2026-04-11 06:42:42.955278 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:42:42.955292 | orchestrator | 2026-04-11 06:42:42.955303 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-11 06:42:42.955315 | orchestrator | Saturday 11 April 2026 06:42:34 +0000 (0:00:01.121) 0:03:47.040 ******** 2026-04-11 06:42:42.955326 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:42:42.955338 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:42:42.955349 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:42:42.955360 | orchestrator | 2026-04-11 06:42:42.955372 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-11 06:42:42.955452 | orchestrator | Saturday 11 April 2026 06:42:35 +0000 (0:00:01.420) 0:03:48.460 ******** 2026-04-11 06:42:42.955463 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 
06:42:42.955474 | orchestrator | 2026-04-11 06:42:42.955485 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-11 06:42:42.955496 | orchestrator | Saturday 11 April 2026 06:42:38 +0000 (0:00:02.161) 0:03:50.622 ******** 2026-04-11 06:42:42.955507 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:42:42.955518 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:42:42.955529 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:42:42.955542 | orchestrator | 2026-04-11 06:42:42.955555 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-11 06:42:42.955568 | orchestrator | Saturday 11 April 2026 06:42:39 +0000 (0:00:01.376) 0:03:51.999 ******** 2026-04-11 06:42:42.955580 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:42:42.955593 | orchestrator | 2026-04-11 06:42:42.955606 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-11 06:42:42.955618 | orchestrator | Saturday 11 April 2026 06:42:41 +0000 (0:00:01.999) 0:03:53.999 ******** 2026-04-11 06:42:42.955638 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:42.955663 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:42.955688 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:46.574208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:46.574352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:46.574372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:46.574463 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:42:46.574498 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:42:46.574512 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:42:46.574533 | orchestrator | 2026-04-11 06:42:46.574547 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-11 06:42:46.574560 | orchestrator | Saturday 11 April 2026 
06:42:45 +0000 (0:00:04.239) 0:03:58.239 ******** 2026-04-11 06:42:46.574579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:42:46.574593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:42:46.574606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:42:46.574619 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:42:46.574643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:42:48.512199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:42:48.512321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:42:48.512338 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:42:48.512348 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:42:48.512356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:42:48.512450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:42:48.512471 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:42:48.512480 | orchestrator | 2026-04-11 06:42:48.512487 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-11 06:42:48.512496 | orchestrator | Saturday 11 April 2026 06:42:47 +0000 (0:00:02.138) 0:04:00.377 ******** 2026-04-11 06:42:48.512504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:42:48.512512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:42:48.512521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}})  2026-04-11 06:42:48.512535 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:42:48.512551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:42:51.934797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:42:51.934901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:42:51.934916 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:42:51.934931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:42:51.934972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:42:51.935014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:42:51.935026 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:42:51.935037 | 
orchestrator | 2026-04-11 06:42:51.935048 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-11 06:42:51.935059 | orchestrator | Saturday 11 April 2026 06:42:49 +0000 (0:00:01.857) 0:04:02.234 ******** 2026-04-11 06:42:51.935069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:51.935081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:51.935099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:42:51.935123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:00.248215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:00.248329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:00.248371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:43:00.248452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:43:00.248467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:43:00.248479 | orchestrator | 2026-04-11 06:43:00.248492 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-11 06:43:00.248523 | orchestrator | Saturday 11 April 2026 06:42:54 +0000 (0:00:04.694) 0:04:06.929 ******** 2026-04-11 06:43:00.248536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:00.248549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:00.248575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:00.248596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:05.178944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:05.179063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:05.179110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:43:05.179139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:43:05.179152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:43:05.179164 | orchestrator | 2026-04-11 06:43:05.179177 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-11 06:43:05.179209 | orchestrator | Saturday 11 April 
2026 06:43:04 +0000 (0:00:10.158) 0:04:17.087 ******** 2026-04-11 06:43:05.179230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:43:05.179263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:43:05.179285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:43:05.179305 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:43:05.179329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:43:05.179353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:43:23.324011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:43:23.324120 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:43:23.324138 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:43:23.324170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:43:23.324184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:43:23.324196 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:43:23.324207 | orchestrator | 2026-04-11 06:43:23.324220 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-11 06:43:23.324231 | orchestrator | Saturday 11 April 2026 06:43:06 +0000 (0:00:01.806) 0:04:18.894 ******** 2026-04-11 06:43:23.324242 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:43:23.324275 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:43:23.324286 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:43:23.324297 | orchestrator | 2026-04-11 06:43:23.324308 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-04-11 06:43:23.324319 | orchestrator | Saturday 11 April 2026 06:43:08 +0000 (0:00:01.802) 0:04:20.697 ******** 2026-04-11 06:43:23.324330 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:43:23.324341 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:43:23.324352 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:43:23.324363 | orchestrator | 2026-04-11 06:43:23.324374 | orchestrator | TASK [nova : Copying over 
vendordata file for nova services] ******************* 2026-04-11 06:43:23.324430 | orchestrator | Saturday 11 April 2026 06:43:10 +0000 (0:00:02.054) 0:04:22.752 ******** 2026-04-11 06:43:23.324449 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-04-11 06:43:23.324460 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-11 06:43:23.324472 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:43:23.324483 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-04-11 06:43:23.324494 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-11 06:43:23.324504 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:43:23.324515 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-04-11 06:43:23.324526 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-11 06:43:23.324540 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:43:23.324552 | orchestrator | 2026-04-11 06:43:23.324565 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-04-11 06:43:23.324578 | orchestrator | Saturday 11 April 2026 06:43:11 +0000 (0:00:01.668) 0:04:24.420 ******** 2026-04-11 06:43:23.324591 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'}) 2026-04-11 06:43:23.324606 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'}) 2026-04-11 06:43:23.324618 | orchestrator | 2026-04-11 06:43:23.324630 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-04-11 06:43:23.324643 | orchestrator | Saturday 11 April 2026 06:43:14 +0000 (0:00:02.598) 0:04:27.019 ******** 2026-04-11 06:43:23.324656 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:43:23.324669 | 
orchestrator | changed: [testbed-node-1] 2026-04-11 06:43:23.324681 | orchestrator | changed: [testbed-node-2] 2026-04-11 06:43:23.324693 | orchestrator | 2026-04-11 06:43:23.324706 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-04-11 06:43:23.324719 | orchestrator | Saturday 11 April 2026 06:43:18 +0000 (0:00:03.608) 0:04:30.628 ******** 2026-04-11 06:43:23.324731 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:43:23.324744 | orchestrator | changed: [testbed-node-1] 2026-04-11 06:43:23.324757 | orchestrator | changed: [testbed-node-2] 2026-04-11 06:43:23.324770 | orchestrator | 2026-04-11 06:43:23.324782 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-04-11 06:43:23.324794 | orchestrator | Saturday 11 April 2026 06:43:21 +0000 (0:00:03.341) 0:04:33.970 ******** 2026-04-11 06:43:23.324814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}}) 2026-04-11 06:43:23.324838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:23.324863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:27.776693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:27.776819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:27.776860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:43:27.776875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:43:27.776907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:43:27.776920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:43:27.776932 | orchestrator | 2026-04-11 06:43:27.776945 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-04-11 06:43:27.776957 | orchestrator | Saturday 11 April 2026 06:43:25 +0000 (0:00:04.415) 0:04:38.385 ******** 2026-04-11 06:43:27.776969 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 06:43:27.776981 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:43:27.776992 | orchestrator | } 2026-04-11 06:43:27.777011 | 
orchestrator | changed: [testbed-node-1] => { 2026-04-11 06:43:27.777028 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:43:27.777039 | orchestrator | } 2026-04-11 06:43:27.777050 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 06:43:27.777060 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:43:27.777071 | orchestrator | } 2026-04-11 06:43:27.777082 | orchestrator | 2026-04-11 06:43:27.777093 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 06:43:27.777104 | orchestrator | Saturday 11 April 2026 06:43:27 +0000 (0:00:01.425) 0:04:39.810 ******** 2026-04-11 06:43:27.777116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:43:27.777129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:43:27.777149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:45:09.059892 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:45:09.060016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:45:09.060061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:45:09.060078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:45:09.060091 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:45:09.060105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:45:09.060137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:45:09.060162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:45:09.060175 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:45:09.060187 | orchestrator | 2026-04-11 06:45:09.060200 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-11 06:45:09.060213 | orchestrator | Saturday 11 April 2026 06:43:29 +0000 (0:00:02.375) 0:04:42.186 ******** 2026-04-11 06:45:09.060224 | orchestrator | 2026-04-11 06:45:09.060236 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-11 
06:45:09.060247 | orchestrator | Saturday 11 April 2026 06:43:30 +0000 (0:00:00.745) 0:04:42.931 ********
2026-04-11 06:45:09.060258 | orchestrator |
2026-04-11 06:45:09.060270 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-11 06:45:09.060281 | orchestrator | Saturday 11 April 2026 06:43:30 +0000 (0:00:00.500) 0:04:43.432 ********
2026-04-11 06:45:09.060293 | orchestrator |
2026-04-11 06:45:09.060304 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-04-11 06:45:09.060315 | orchestrator | Saturday 11 April 2026 06:43:31 +0000 (0:00:00.872) 0:04:44.304 ********
2026-04-11 06:45:09.060326 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:45:09.060338 | orchestrator | changed: [testbed-node-1]
2026-04-11 06:45:09.060349 | orchestrator | changed: [testbed-node-2]
2026-04-11 06:45:09.060360 | orchestrator |
2026-04-11 06:45:09.060371 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-04-11 06:45:09.060383 | orchestrator | Saturday 11 April 2026 06:44:05 +0000 (0:00:33.260) 0:05:17.564 ********
2026-04-11 06:45:09.060394 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:45:09.060405 | orchestrator | changed: [testbed-node-1]
2026-04-11 06:45:09.060417 | orchestrator | changed: [testbed-node-2]
2026-04-11 06:45:09.060428 | orchestrator |
2026-04-11 06:45:09.060439 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] ***********************
2026-04-11 06:45:09.060451 | orchestrator | Saturday 11 April 2026 06:44:18 +0000 (0:00:13.859) 0:05:31.424 ********
2026-04-11 06:45:09.060462 | orchestrator | changed: [testbed-node-0]
2026-04-11 06:45:09.060522 | orchestrator | changed: [testbed-node-2]
2026-04-11 06:45:09.060542 | orchestrator | changed: [testbed-node-1]
2026-04-11 06:45:09.060560 | orchestrator |
2026-04-11 06:45:09.060572 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-04-11 06:45:09.060583 | orchestrator |
2026-04-11 06:45:09.060593 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-11 06:45:09.060604 | orchestrator | Saturday 11 April 2026 06:44:30 +0000 (0:00:11.154) 0:05:42.578 ********
2026-04-11 06:45:09.060615 | orchestrator | included: /ansible/roles/nova-cell/tasks/reconfigure.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 06:45:09.060627 | orchestrator |
2026-04-11 06:45:09.060638 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-11 06:45:09.060649 | orchestrator | Saturday 11 April 2026 06:44:32 +0000 (0:00:02.482) 0:05:45.061 ********
2026-04-11 06:45:09.060668 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:45:09.060679 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:45:09.060690 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:45:09.060700 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:45:09.060711 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:45:09.060722 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:45:09.060733 | orchestrator |
2026-04-11 06:45:09.060744 | orchestrator | TASK [nova-cell : Get new Libvirt version] *************************************
2026-04-11 06:45:09.060755 | orchestrator | Saturday 11 April 2026 06:44:34 +0000 (0:00:02.270) 0:05:47.332 ********
2026-04-11 06:45:09.060765 | orchestrator | changed: [testbed-node-3]
2026-04-11 06:45:09.060776 | orchestrator |
2026-04-11 06:45:09.060787 | orchestrator | TASK [nova-cell : Cache new Libvirt version] ***********************************
2026-04-11 06:45:09.060798 | orchestrator | Saturday 11 April 2026 06:45:07 +0000 (0:00:32.800) 0:06:20.132 ********
2026-04-11 06:45:09.060809 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:45:09.060820 | orchestrator
| 2026-04-11 06:45:09.060839 | orchestrator | TASK [Get nova_libvirt image info] *********************************************
2026-04-11 06:46:00.313713 | orchestrator | Saturday 11 April 2026 06:45:10 +0000 (0:00:02.385) 0:06:22.518 ********
2026-04-11 06:46:00.313835 | orchestrator | included: service-image-info for testbed-node-3
2026-04-11 06:46:00.313853 | orchestrator |
2026-04-11 06:46:00.313865 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] *****************
2026-04-11 06:46:00.313877 | orchestrator | Saturday 11 April 2026 06:45:12 +0000 (0:00:02.050) 0:06:24.569 ********
2026-04-11 06:46:00.313887 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:46:00.313899 | orchestrator |
2026-04-11 06:46:00.313910 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-11 06:46:00.313929 | orchestrator | Saturday 11 April 2026 06:45:16 +0000 (0:00:04.346) 0:06:28.915 ********
2026-04-11 06:46:00.313949 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:46:00.313968 | orchestrator |
2026-04-11 06:46:00.313988 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] ****************
2026-04-11 06:46:00.313999 | orchestrator | Saturday 11 April 2026 06:45:19 +0000 (0:00:03.016) 0:06:31.931 ********
2026-04-11 06:46:00.314011 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:46:00.314090 | orchestrator |
2026-04-11 06:46:00.314102 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-11 06:46:00.314113 | orchestrator | Saturday 11 April 2026 06:45:22 +0000 (0:00:03.045) 0:06:34.977 ********
2026-04-11 06:46:00.314124 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:46:00.314135 | orchestrator |
2026-04-11 06:46:00.314175 | orchestrator | TASK [nova-cell : Get container facts] *****************************************
2026-04-11 06:46:00.314197 | orchestrator | Saturday 11 April 2026 06:45:25 +0000 (0:00:03.161) 0:06:38.139 ********
2026-04-11 06:46:00.314208 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:46:00.314220 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:46:00.314231 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:46:00.314242 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:46:00.314253 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:46:00.314264 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:46:00.314275 | orchestrator |
2026-04-11 06:46:00.314286 | orchestrator | TASK [nova-cell : Get current Libvirt version] *********************************
2026-04-11 06:46:00.314296 | orchestrator | Saturday 11 April 2026 06:45:30 +0000 (0:00:04.953) 0:06:43.092 ********
2026-04-11 06:46:00.314307 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:46:00.314318 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:46:00.314329 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:46:00.314341 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:46:00.314352 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:46:00.314363 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:46:00.314381 | orchestrator |
2026-04-11 06:46:00.314400 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************
2026-04-11 06:46:00.314421 | orchestrator | Saturday 11 April 2026 06:45:36 +0000 (0:00:05.882) 0:06:48.975 ********
2026-04-11 06:46:00.314457 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:46:00.314469 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:46:00.314480 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:46:00.314491 | orchestrator | ok: [testbed-node-4] => {
2026-04-11 06:46:00.314521 | orchestrator |  "changed": false,
2026-04-11 06:46:00.314533 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n"
2026-04-11 06:46:00.314545 | orchestrator | }
2026-04-11 06:46:00.314556 | orchestrator | ok: [testbed-node-5] => {
2026-04-11 06:46:00.314567 | orchestrator |  "changed": false,
2026-04-11 06:46:00.314578 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n"
2026-04-11 06:46:00.314589 | orchestrator | }
2026-04-11 06:46:00.314599 | orchestrator | ok: [testbed-node-3] => {
2026-04-11 06:46:00.314610 | orchestrator |  "changed": false,
2026-04-11 06:46:00.314621 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n"
2026-04-11 06:46:00.314631 | orchestrator | }
2026-04-11 06:46:00.314642 | orchestrator |
2026-04-11 06:46:00.314653 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-04-11 06:46:00.314664 | orchestrator | Saturday 11 April 2026 06:45:44 +0000 (0:00:07.698) 0:06:56.673 ********
2026-04-11 06:46:00.314674 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:46:00.314685 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:46:00.314697 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:46:00.314717 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 06:46:00.314736 | orchestrator |
2026-04-11 06:46:00.314753 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-11 06:46:00.314764 | orchestrator | Saturday 11 April 2026 06:45:46 +0000 (0:00:02.285) 0:06:58.959 ********
2026-04-11 06:46:00.314775 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-11 06:46:00.314786 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-11 06:46:00.314799 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-11 06:46:00.314818 | orchestrator |
2026-04-11 06:46:00.314833 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-11 06:46:00.314848 | orchestrator | Saturday 11 April 2026 06:45:48 +0000 (0:00:01.675) 0:07:00.634
********
2026-04-11 06:46:00.314865 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-11 06:46:00.314883 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-11 06:46:00.314901 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-11 06:46:00.314919 | orchestrator |
2026-04-11 06:46:00.314938 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-11 06:46:00.314956 | orchestrator | Saturday 11 April 2026 06:45:50 +0000 (0:00:02.235) 0:07:02.869 ********
2026-04-11 06:46:00.314974 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter) 
2026-04-11 06:46:00.314992 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:46:00.315011 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter) 
2026-04-11 06:46:00.315030 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:46:00.315043 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter) 
2026-04-11 06:46:00.315053 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:46:00.315064 | orchestrator |
2026-04-11 06:46:00.315075 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-04-11 06:46:00.315106 | orchestrator | Saturday 11 April 2026 06:45:51 +0000 (0:00:01.372) 0:07:04.242 ********
2026-04-11 06:46:00.315118 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables) 
2026-04-11 06:46:00.315129 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-04-11 06:46:00.315140 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:46:00.315151 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables) 
2026-04-11 06:46:00.315172 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-04-11 06:46:00.315183 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-11 06:46:00.315194 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-11 06:46:00.315205 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-11 06:46:00.315216 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:46:00.315227 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-11 06:46:00.315237 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-11 06:46:00.315248 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables) 
2026-04-11 06:46:00.315259 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-04-11 06:46:00.315276 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:46:00.315288 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-11 06:46:00.315298 | orchestrator |
2026-04-11 06:46:00.315309 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-04-11 06:46:00.315320 | orchestrator | Saturday 11 April 2026 06:45:54 +0000 (0:00:02.307) 0:07:06.549 ********
2026-04-11 06:46:00.315331 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:46:00.315342 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:46:00.315352 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:46:00.315363 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:46:00.315374 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:46:00.315385 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:46:00.315395 | orchestrator |
2026-04-11 06:46:00.315406 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-04-11 06:46:00.315417 | orchestrator | Saturday 11 April 2026 06:45:56 +0000 (0:00:02.214) 0:07:08.764 ********
2026-04-11 06:46:00.315427 | orchestrator | skipping: [testbed-node-0]
2026-04-11
06:46:00.315441 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:46:00.315460 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:46:00.315478 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:46:00.315551 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:46:00.315576 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:46:00.315595 | orchestrator | 2026-04-11 06:46:00.315608 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-11 06:46:00.315619 | orchestrator | Saturday 11 April 2026 06:45:58 +0000 (0:00:02.627) 0:07:11.391 ******** 2026-04-11 06:46:00.315633 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:46:00.315649 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:46:00.315683 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:46:01.444700 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:46:01.444807 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:46:01.444827 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:46:01.444840 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2026-04-11 06:46:01.444852 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:46:01.444937 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:46:01.444962 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:46:01.444975 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:46:01.444987 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:46:01.445000 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:46:01.445019 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:46:01.445039 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:46:07.994135 | orchestrator | 2026-04-11 06:46:07.994249 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-11 06:46:07.994265 | orchestrator | 
Saturday 11 April 2026 06:46:02 +0000 (0:00:03.656) 0:07:15.048 ******** 2026-04-11 06:46:07.994278 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:46:07.994290 | orchestrator | 2026-04-11 06:46:07.994301 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-11 06:46:07.994312 | orchestrator | Saturday 11 April 2026 06:46:04 +0000 (0:00:02.282) 0:07:17.330 ******** 2026-04-11 06:46:07.994343 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:46:07.994360 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:46:07.994372 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:46:07.994404 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:46:07.994437 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:46:07.994455 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:46:07.994468 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:46:07.994480 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:46:07.994491 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:46:07.994588 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:46:07.994612 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:46:11.312486 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:46:11.312654 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:46:11.312667 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:46:11.312676 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:46:11.312699 | orchestrator | 2026-04-11 06:46:11.312707 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-11 06:46:11.312715 | orchestrator | Saturday 11 April 2026 06:46:09 
+0000 (0:00:04.764) 0:07:22.094 ******** 2026-04-11 06:46:11.312724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:46:11.312751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:46:11.312759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:46:11.312767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:46:11.312779 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:46:11.312786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:46:11.312794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:46:11.312808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:46:14.177831 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:46:14.177995 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:46:14.178104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:46:14.178131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:46:14.178193 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:46:14.178216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:46:14.178238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:46:14.178259 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:46:14.178279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:46:14.178299 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:46:14.178354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:46:14.178380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:46:14.178413 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:46:14.178434 | orchestrator | 2026-04-11 06:46:14.178456 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-11 06:46:14.178478 | orchestrator | Saturday 11 April 2026 06:46:13 +0000 (0:00:03.598) 0:07:25.693 ******** 2026-04-11 
06:46:14.178499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:46:14.178552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:46:14.178575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:46:14.178624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:46:19.690313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:46:19.690451 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:46:19.690472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:46:19.690487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:46:19.690498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:46:19.690573 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:46:19.690586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:46:19.690598 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:46:19.690657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:46:19.690691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:46:19.690704 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:46:19.690716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:46:19.690728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:46:19.690740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:46:19.690752 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:46:19.690763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:46:19.690774 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:46:19.690785 | orchestrator | 2026-04-11 06:46:19.690798 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-11 06:46:19.690809 | orchestrator | Saturday 11 April 2026 06:46:16 +0000 (0:00:03.465) 0:07:29.159 ******** 2026-04-11 06:46:19.690827 | orchestrator | 
skipping: [testbed-node-0] 2026-04-11 06:46:19.690844 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:46:19.690857 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:46:19.690871 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 06:46:19.690885 | orchestrator | 2026-04-11 06:46:19.690898 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-11 06:46:19.690911 | orchestrator | Saturday 11 April 2026 06:46:18 +0000 (0:00:02.285) 0:07:31.444 ******** 2026-04-11 06:46:19.690931 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 06:47:05.562624 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 06:47:05.562743 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 06:47:05.562759 | orchestrator | 2026-04-11 06:47:05.562771 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-11 06:47:05.562784 | orchestrator | Saturday 11 April 2026 06:46:21 +0000 (0:00:02.270) 0:07:33.715 ******** 2026-04-11 06:47:05.562795 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 06:47:05.562806 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 06:47:05.562817 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 06:47:05.562828 | orchestrator | 2026-04-11 06:47:05.562839 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-11 06:47:05.562850 | orchestrator | Saturday 11 April 2026 06:46:23 +0000 (0:00:02.037) 0:07:35.753 ******** 2026-04-11 06:47:05.562861 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:47:05.562872 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:47:05.562883 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:47:05.562893 | orchestrator | 2026-04-11 06:47:05.562904 | orchestrator | TASK [nova-cell : Extract cinder key from file] 
******************************** 2026-04-11 06:47:05.562915 | orchestrator | Saturday 11 April 2026 06:46:24 +0000 (0:00:01.523) 0:07:37.277 ******** 2026-04-11 06:47:05.562926 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:47:05.562937 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:47:05.562947 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:47:05.562958 | orchestrator | 2026-04-11 06:47:05.562969 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-11 06:47:05.562980 | orchestrator | Saturday 11 April 2026 06:46:26 +0000 (0:00:01.780) 0:07:39.057 ******** 2026-04-11 06:47:05.562991 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-11 06:47:05.563002 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-11 06:47:05.563012 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-11 06:47:05.563023 | orchestrator | 2026-04-11 06:47:05.563034 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-11 06:47:05.563045 | orchestrator | Saturday 11 April 2026 06:46:28 +0000 (0:00:02.217) 0:07:41.274 ******** 2026-04-11 06:47:05.563056 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-11 06:47:05.563067 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-11 06:47:05.563077 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-11 06:47:05.563088 | orchestrator | 2026-04-11 06:47:05.563101 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-11 06:47:05.563113 | orchestrator | Saturday 11 April 2026 06:46:31 +0000 (0:00:02.249) 0:07:43.523 ******** 2026-04-11 06:47:05.563126 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-11 06:47:05.563139 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-11 06:47:05.563152 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 
2026-04-11 06:47:05.563164 | orchestrator | ok: [testbed-node-3] => (item=nova-libvirt) 2026-04-11 06:47:05.563176 | orchestrator | ok: [testbed-node-4] => (item=nova-libvirt) 2026-04-11 06:47:05.563189 | orchestrator | ok: [testbed-node-5] => (item=nova-libvirt) 2026-04-11 06:47:05.563201 | orchestrator | 2026-04-11 06:47:05.563215 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-11 06:47:05.563227 | orchestrator | Saturday 11 April 2026 06:46:35 +0000 (0:00:04.763) 0:07:48.287 ******** 2026-04-11 06:47:05.563264 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:47:05.563279 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:47:05.563292 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:47:05.563351 | orchestrator | 2026-04-11 06:47:05.563366 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-11 06:47:05.563379 | orchestrator | Saturday 11 April 2026 06:46:37 +0000 (0:00:01.576) 0:07:49.863 ******** 2026-04-11 06:47:05.563392 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:47:05.563404 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:47:05.563416 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:47:05.563429 | orchestrator | 2026-04-11 06:47:05.563442 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-11 06:47:05.563455 | orchestrator | Saturday 11 April 2026 06:46:38 +0000 (0:00:01.381) 0:07:51.245 ******** 2026-04-11 06:47:05.563467 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:47:05.563478 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:47:05.563488 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:47:05.563499 | orchestrator | 2026-04-11 06:47:05.563510 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-11 06:47:05.563520 | orchestrator | Saturday 11 April 2026 06:46:41 +0000 
(0:00:02.249) 0:07:53.494 ******** 2026-04-11 06:47:05.563554 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-11 06:47:05.563567 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-11 06:47:05.563577 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-11 06:47:05.563603 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-11 06:47:05.563615 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-11 06:47:05.563646 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-11 06:47:05.563658 | orchestrator | 2026-04-11 06:47:05.563669 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-11 06:47:05.563680 | orchestrator | Saturday 11 April 2026 06:46:45 +0000 (0:00:04.548) 0:07:58.042 ******** 2026-04-11 06:47:05.563691 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-11 06:47:05.563702 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-11 06:47:05.563713 | orchestrator | ok: [testbed-node-5] => (item=None) 
2026-04-11 06:47:05.563723 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-11 06:47:05.563734 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:47:05.563745 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-11 06:47:05.563755 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:47:05.563766 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-11 06:47:05.563777 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:47:05.563788 | orchestrator | 2026-04-11 06:47:05.563798 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-11 06:47:05.563809 | orchestrator | Saturday 11 April 2026 06:46:50 +0000 (0:00:04.493) 0:08:02.536 ******** 2026-04-11 06:47:05.563820 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:47:05.563830 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:05.563850 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:47:05.563862 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 06:47:05.563873 | orchestrator | 2026-04-11 06:47:05.563884 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-11 06:47:05.563894 | orchestrator | Saturday 11 April 2026 06:46:53 +0000 (0:00:03.452) 0:08:05.989 ******** 2026-04-11 06:47:05.563905 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 06:47:05.563916 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 06:47:05.563926 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 06:47:05.563937 | orchestrator | 2026-04-11 06:47:05.563947 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-11 06:47:05.563958 | orchestrator | Saturday 11 April 2026 06:46:55 +0000 (0:00:02.149) 0:08:08.139 ******** 2026-04-11 06:47:05.563969 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:47:05.563980 | 
orchestrator | skipping: [testbed-node-4] 2026-04-11 06:47:05.563990 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:47:05.564001 | orchestrator | 2026-04-11 06:47:05.564011 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-11 06:47:05.564022 | orchestrator | Saturday 11 April 2026 06:46:56 +0000 (0:00:01.335) 0:08:09.474 ******** 2026-04-11 06:47:05.564033 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:47:05.564043 | orchestrator | 2026-04-11 06:47:05.564054 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-11 06:47:05.564065 | orchestrator | Saturday 11 April 2026 06:46:58 +0000 (0:00:01.132) 0:08:10.606 ******** 2026-04-11 06:47:05.564075 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:47:05.564086 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:47:05.564096 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:47:05.564107 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:47:05.564117 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:05.564128 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:47:05.564139 | orchestrator | 2026-04-11 06:47:05.564149 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-11 06:47:05.564160 | orchestrator | Saturday 11 April 2026 06:47:00 +0000 (0:00:01.923) 0:08:12.530 ******** 2026-04-11 06:47:05.564171 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 06:47:05.564182 | orchestrator | 2026-04-11 06:47:05.564192 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-11 06:47:05.564203 | orchestrator | Saturday 11 April 2026 06:47:01 +0000 (0:00:01.859) 0:08:14.390 ******** 2026-04-11 06:47:05.564214 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:47:05.564224 | orchestrator | skipping: [testbed-node-4] 2026-04-11 
06:47:05.564235 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:47:05.564246 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:47:05.564256 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:05.564267 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:47:05.564278 | orchestrator | 2026-04-11 06:47:05.564288 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-11 06:47:05.564299 | orchestrator | Saturday 11 April 2026 06:47:03 +0000 (0:00:01.728) 0:08:16.118 ******** 2026-04-11 06:47:05.564318 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:47:05.564350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:47:08.639653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:47:08.639772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:47:08.639799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:47:08.639812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:47:08.639843 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:47:08.639880 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:47:08.639912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:47:08.639924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 
06:47:08.639935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:47:08.639948 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:47:08.639966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:47:08.639986 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:47:08.640006 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:47:14.378681 | orchestrator | 2026-04-11 06:47:14.378783 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-11 06:47:14.378801 | 
orchestrator | Saturday 11 April 2026 06:47:09 +0000 (0:00:06.101) 0:08:22.219 ******** 2026-04-11 06:47:14.378816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:47:14.378827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:47:14.378847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:47:14.378870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:47:14.378877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:47:14.378897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:47:14.378905 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:47:14.378914 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:47:14.378930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:47:14.378937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:47:14.378949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:47:37.174926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:47:37.175023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 
06:47:37.175037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:47:37.175078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:47:37.175090 | orchestrator | 2026-04-11 06:47:37.175101 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-11 06:47:37.175111 | orchestrator | Saturday 11 April 2026 06:47:17 +0000 (0:00:08.272) 0:08:30.492 ******** 2026-04-11 06:47:37.175120 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:47:37.175129 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:47:37.175138 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:47:37.175147 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:47:37.175155 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:37.175164 | orchestrator | skipping: 
[testbed-node-2] 2026-04-11 06:47:37.175172 | orchestrator | 2026-04-11 06:47:37.175181 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-11 06:47:37.175190 | orchestrator | Saturday 11 April 2026 06:47:20 +0000 (0:00:02.883) 0:08:33.375 ******** 2026-04-11 06:47:37.175199 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-11 06:47:37.175208 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-11 06:47:37.175217 | orchestrator | ok: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-11 06:47:37.175225 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-11 06:47:37.175234 | orchestrator | ok: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-11 06:47:37.175242 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-11 06:47:37.175252 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:47:37.175261 | orchestrator | ok: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-11 06:47:37.175270 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-11 06:47:37.175278 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:37.175287 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-11 06:47:37.175296 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:47:37.175320 | orchestrator | ok: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-11 06:47:37.175329 | orchestrator | ok: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-11 06:47:37.175338 | orchestrator | ok: [testbed-node-5] => (item={'src': 
'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-11 06:47:37.175347 | orchestrator | 2026-04-11 06:47:37.175356 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-11 06:47:37.175364 | orchestrator | Saturday 11 April 2026 06:47:25 +0000 (0:00:04.978) 0:08:38.354 ******** 2026-04-11 06:47:37.175373 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:47:37.175388 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:47:37.175397 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:47:37.175406 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:47:37.175415 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:37.175423 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:47:37.175432 | orchestrator | 2026-04-11 06:47:37.175441 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-11 06:47:37.175449 | orchestrator | Saturday 11 April 2026 06:47:27 +0000 (0:00:01.760) 0:08:40.115 ******** 2026-04-11 06:47:37.175459 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-11 06:47:37.175470 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-11 06:47:37.175480 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-11 06:47:37.175490 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-11 06:47:37.175500 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-11 06:47:37.175510 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-11 06:47:37.175520 | orchestrator | 
ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-11 06:47:37.175530 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-11 06:47:37.175540 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:47:37.175575 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-11 06:47:37.175586 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-11 06:47:37.175595 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-11 06:47:37.175605 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:37.175620 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-11 06:47:37.175630 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-11 06:47:37.175640 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:47:37.175650 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-11 06:47:37.175659 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-11 06:47:37.175669 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-11 06:47:37.175679 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-11 06:47:37.175689 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-11 06:47:37.175699 | 
orchestrator | 2026-04-11 06:47:37.175709 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-11 06:47:37.175719 | orchestrator | Saturday 11 April 2026 06:47:34 +0000 (0:00:06.572) 0:08:46.687 ******** 2026-04-11 06:47:37.175729 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-11 06:47:37.175739 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-11 06:47:37.175749 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-11 06:47:37.175759 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-11 06:47:37.175775 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-11 06:47:37.175785 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-11 06:47:37.175796 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-11 06:47:37.175806 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-11 06:47:37.175816 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-11 06:47:37.175825 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-11 06:47:37.175839 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-11 06:47:53.828605 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-11 06:47:53.828715 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:47:53.828730 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-11 06:47:53.828740 | orchestrator | 
ok: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-11 06:47:53.828750 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-11 06:47:53.828760 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:53.828770 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-11 06:47:53.828780 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-11 06:47:53.828790 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:47:53.828799 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-11 06:47:53.828809 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-11 06:47:53.828819 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-11 06:47:53.828829 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-11 06:47:53.828838 | orchestrator | ok: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-11 06:47:53.828848 | orchestrator | ok: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-11 06:47:53.828858 | orchestrator | ok: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-11 06:47:53.828867 | orchestrator | 2026-04-11 06:47:53.828878 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-11 06:47:53.828887 | orchestrator | Saturday 11 April 2026 06:47:42 +0000 (0:00:08.027) 0:08:54.715 ******** 2026-04-11 06:47:53.828897 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:47:53.828906 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:47:53.828916 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:47:53.828925 | orchestrator | skipping: [testbed-node-0] 2026-04-11 
06:47:53.828935 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:53.828945 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:47:53.828954 | orchestrator | 2026-04-11 06:47:53.828964 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-11 06:47:53.828973 | orchestrator | Saturday 11 April 2026 06:47:44 +0000 (0:00:01.957) 0:08:56.672 ******** 2026-04-11 06:47:53.828983 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:47:53.828993 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:47:53.829002 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:47:53.829011 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:47:53.829021 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:53.829030 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:47:53.829040 | orchestrator | 2026-04-11 06:47:53.829064 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-11 06:47:53.829094 | orchestrator | Saturday 11 April 2026 06:47:45 +0000 (0:00:01.806) 0:08:58.479 ******** 2026-04-11 06:47:53.829105 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:47:53.829116 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:53.829127 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:47:53.829139 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:47:53.829151 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:47:53.829161 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:47:53.829172 | orchestrator | 2026-04-11 06:47:53.829183 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-04-11 06:47:53.829195 | orchestrator | Saturday 11 April 2026 06:47:49 +0000 (0:00:03.292) 0:09:01.771 ******** 2026-04-11 06:47:53.829207 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:47:53.829218 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:47:53.829229 | 
orchestrator | skipping: [testbed-node-2] 2026-04-11 06:47:53.829239 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:53.829250 | orchestrator | changed: [testbed-node-4] 2026-04-11 06:47:53.829261 | orchestrator | changed: [testbed-node-5] 2026-04-11 06:47:53.829272 | orchestrator | 2026-04-11 06:47:53.829283 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-11 06:47:53.829295 | orchestrator | Saturday 11 April 2026 06:47:52 +0000 (0:00:03.009) 0:09:04.781 ******** 2026-04-11 06:47:53.829309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:47:53.829344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:47:53.829359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:47:53.829370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:47:53.829394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:47:53.829406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:47:53.829418 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:47:53.829430 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:47:53.829449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:47:58.977257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:47:58.978231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:47:58.978290 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:47:58.978320 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:47:58.978333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:47:58.978345 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:58.978357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:47:58.978369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:47:58.978380 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:47:58.978413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:47:58.978425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:47:58.978444 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:47:58.978455 | orchestrator | 2026-04-11 06:47:58.978468 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-11 06:47:58.978480 | orchestrator | Saturday 11 April 2026 06:47:55 +0000 (0:00:03.046) 0:09:07.828 ******** 2026-04-11 06:47:58.978490 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-11 06:47:58.978502 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-11 06:47:58.978513 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:47:58.978523 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-11 06:47:58.978534 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-11 06:47:58.978545 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:47:58.978596 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-11 06:47:58.978609 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-11 06:47:58.978620 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:47:58.978631 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-11 06:47:58.978642 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-11 06:47:58.978652 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:47:58.978663 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-11 06:47:58.978674 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-11 06:47:58.978685 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:47:58.978696 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-11 06:47:58.978707 | orchestrator | skipping: [testbed-node-2] => 
(item=nova-compute-ironic)  2026-04-11 06:47:58.978718 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:47:58.978729 | orchestrator | 2026-04-11 06:47:58.978739 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-04-11 06:47:58.978750 | orchestrator | Saturday 11 April 2026 06:47:57 +0000 (0:00:01.861) 0:09:09.689 ******** 2026-04-11 06:47:58.978763 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:47:58.978785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:48:00.506842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:48:00.507043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:48:00.507079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:48:00.507102 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:48:00.507123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:48:00.507145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:48:00.507230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:48:00.507257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:48:00.507286 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:48:00.507306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:48:00.507327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:48:00.507373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:48:04.989369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:48:04.989488 | orchestrator | 2026-04-11 06:48:04.989500 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-04-11 06:48:04.989510 | orchestrator | Saturday 11 April 2026 06:48:01 +0000 (0:00:04.479) 0:09:14.169 ******** 2026-04-11 06:48:04.989518 | orchestrator | changed: [testbed-node-3] => { 2026-04-11 06:48:04.989527 | orchestrator |  "msg": 
"Notifying handlers" 2026-04-11 06:48:04.989535 | orchestrator | } 2026-04-11 06:48:04.989542 | orchestrator | changed: [testbed-node-4] => { 2026-04-11 06:48:04.989550 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:48:04.989557 | orchestrator | } 2026-04-11 06:48:04.989564 | orchestrator | changed: [testbed-node-5] => { 2026-04-11 06:48:04.989624 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:48:04.989632 | orchestrator | } 2026-04-11 06:48:04.989639 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 06:48:04.989646 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:48:04.989653 | orchestrator | } 2026-04-11 06:48:04.989660 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 06:48:04.989668 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:48:04.989675 | orchestrator | } 2026-04-11 06:48:04.989682 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 06:48:04.989689 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:48:04.989696 | orchestrator | } 2026-04-11 06:48:04.989704 | orchestrator | 2026-04-11 06:48:04.989729 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 06:48:04.989738 | orchestrator | Saturday 11 April 2026 06:48:03 +0000 (0:00:02.012) 0:09:16.182 ******** 2026-04-11 06:48:04.989747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:48:04.989758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:48:04.989792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:48:04.989800 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:48:04.989827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:48:04.989836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:48:04.989848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:48:04.989856 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:48:04.989863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:48:04.989877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:48:04.989892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:50:53.154888 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:50:53.155015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:50:53.155052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:50:53.155067 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:50:53.155080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:50:53.155114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:50:53.155127 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:50:53.155138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:50:53.155150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:50:53.155161 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:50:53.155172 | orchestrator | 2026-04-11 06:50:53.155185 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-11 06:50:53.155213 | orchestrator | Saturday 11 April 2026 06:48:06 +0000 (0:00:03.036) 0:09:19.218 ******** 2026-04-11 06:50:53.155225 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:50:53.155236 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:50:53.155247 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:50:53.155258 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:50:53.155268 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:50:53.155279 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:50:53.155290 | orchestrator | 2026-04-11 06:50:53.155302 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-11 06:50:53.155312 | orchestrator | Saturday 11 April 2026 06:48:08 +0000 (0:00:01.750) 0:09:20.969 ******** 2026-04-11 06:50:53.155323 | 
orchestrator | 2026-04-11 06:50:53.155334 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-11 06:50:53.155345 | orchestrator | Saturday 11 April 2026 06:48:09 +0000 (0:00:00.525) 0:09:21.494 ******** 2026-04-11 06:50:53.155356 | orchestrator | 2026-04-11 06:50:53.155366 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-11 06:50:53.155377 | orchestrator | Saturday 11 April 2026 06:48:09 +0000 (0:00:00.693) 0:09:22.188 ******** 2026-04-11 06:50:53.155388 | orchestrator | 2026-04-11 06:50:53.155398 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-11 06:50:53.155410 | orchestrator | Saturday 11 April 2026 06:48:10 +0000 (0:00:00.523) 0:09:22.711 ******** 2026-04-11 06:50:53.155423 | orchestrator | 2026-04-11 06:50:53.155435 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-11 06:50:53.155456 | orchestrator | Saturday 11 April 2026 06:48:10 +0000 (0:00:00.531) 0:09:23.243 ******** 2026-04-11 06:50:53.155468 | orchestrator | 2026-04-11 06:50:53.155486 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-11 06:50:53.155497 | orchestrator | Saturday 11 April 2026 06:48:11 +0000 (0:00:00.556) 0:09:23.799 ******** 2026-04-11 06:50:53.155508 | orchestrator | 2026-04-11 06:50:53.155519 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-11 06:50:53.155529 | orchestrator | Saturday 11 April 2026 06:48:12 +0000 (0:00:00.859) 0:09:24.659 ******** 2026-04-11 06:50:53.155540 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:50:53.155551 | orchestrator | changed: [testbed-node-1] 2026-04-11 06:50:53.155562 | orchestrator | changed: [testbed-node-2] 2026-04-11 06:50:53.155573 | orchestrator | 2026-04-11 06:50:53.155584 | orchestrator | RUNNING 
HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-11 06:50:53.155594 | orchestrator | Saturday 11 April 2026 06:48:27 +0000 (0:00:15.089) 0:09:39.748 ******** 2026-04-11 06:50:53.155605 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:50:53.155616 | orchestrator | changed: [testbed-node-1] 2026-04-11 06:50:53.155626 | orchestrator | changed: [testbed-node-2] 2026-04-11 06:50:53.155637 | orchestrator | 2026-04-11 06:50:53.155648 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-11 06:50:53.155659 | orchestrator | Saturday 11 April 2026 06:48:49 +0000 (0:00:22.367) 0:10:02.116 ******** 2026-04-11 06:50:53.155694 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:50:53.155705 | orchestrator | changed: [testbed-node-4] 2026-04-11 06:50:53.155716 | orchestrator | changed: [testbed-node-5] 2026-04-11 06:50:53.155727 | orchestrator | 2026-04-11 06:50:53.155738 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-11 06:50:53.155749 | orchestrator | Saturday 11 April 2026 06:49:16 +0000 (0:00:26.760) 0:10:28.876 ******** 2026-04-11 06:50:53.155760 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:50:53.155771 | orchestrator | changed: [testbed-node-4] 2026-04-11 06:50:53.155781 | orchestrator | changed: [testbed-node-5] 2026-04-11 06:50:53.155792 | orchestrator | 2026-04-11 06:50:53.155803 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-11 06:50:53.155814 | orchestrator | Saturday 11 April 2026 06:50:01 +0000 (0:00:44.842) 0:11:13.718 ******** 2026-04-11 06:50:53.155825 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:50:53.155836 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 
2026-04-11 06:50:53.155848 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-04-11 06:50:53.155859 | orchestrator | changed: [testbed-node-4] 2026-04-11 06:50:53.155870 | orchestrator | changed: [testbed-node-5] 2026-04-11 06:50:53.155880 | orchestrator | 2026-04-11 06:50:53.155891 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-11 06:50:53.155902 | orchestrator | Saturday 11 April 2026 06:50:08 +0000 (0:00:07.515) 0:11:21.234 ******** 2026-04-11 06:50:53.155913 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:50:53.155924 | orchestrator | changed: [testbed-node-4] 2026-04-11 06:50:53.155935 | orchestrator | changed: [testbed-node-5] 2026-04-11 06:50:53.155946 | orchestrator | 2026-04-11 06:50:53.155956 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-11 06:50:53.155967 | orchestrator | Saturday 11 April 2026 06:50:10 +0000 (0:00:01.776) 0:11:23.010 ******** 2026-04-11 06:50:53.155978 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:50:53.155988 | orchestrator | changed: [testbed-node-4] 2026-04-11 06:50:53.155999 | orchestrator | changed: [testbed-node-5] 2026-04-11 06:50:53.156010 | orchestrator | 2026-04-11 06:50:53.156021 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-11 06:50:53.156032 | orchestrator | Saturday 11 April 2026 06:50:42 +0000 (0:00:31.622) 0:11:54.633 ******** 2026-04-11 06:50:53.156043 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:50:53.156061 | orchestrator | 2026-04-11 06:50:53.156072 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-11 06:50:53.156083 | orchestrator | Saturday 11 April 2026 06:50:43 +0000 (0:00:01.543) 0:11:56.176 ******** 2026-04-11 06:50:53.156094 | orchestrator | skipping: [testbed-node-4] 
2026-04-11 06:50:53.156105 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:50:53.156115 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:50:53.156126 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:50:53.156137 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:50:53.156148 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-11 06:50:53.156159 | orchestrator | 2026-04-11 06:50:53.156170 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-11 06:50:53.156187 | orchestrator | Saturday 11 April 2026 06:50:53 +0000 (0:00:09.457) 0:12:05.634 ******** 2026-04-11 06:51:55.490315 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:51:55.490435 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:51:55.490452 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:51:55.490465 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:51:55.490476 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:51:55.490507 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:51:55.490530 | orchestrator | 2026-04-11 06:51:55.490544 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-11 06:51:55.490557 | orchestrator | Saturday 11 April 2026 06:51:04 +0000 (0:00:11.447) 0:12:17.082 ******** 2026-04-11 06:51:55.490569 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:51:55.490581 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:51:55.490593 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:51:55.490605 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:51:55.490617 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:51:55.490629 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-04-11 06:51:55.490642 | orchestrator | 2026-04-11 06:51:55.490654 | orchestrator | TASK [nova-cell : Get a list of existing cells] 
******************************** 2026-04-11 06:51:55.490666 | orchestrator | Saturday 11 April 2026 06:51:10 +0000 (0:00:05.572) 0:12:22.654 ******** 2026-04-11 06:51:55.490678 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-11 06:51:55.490690 | orchestrator | 2026-04-11 06:51:55.490758 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-11 06:51:55.490771 | orchestrator | Saturday 11 April 2026 06:51:24 +0000 (0:00:13.849) 0:12:36.504 ******** 2026-04-11 06:51:55.490803 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-11 06:51:55.490823 | orchestrator | 2026-04-11 06:51:55.490842 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-11 06:51:55.490860 | orchestrator | Saturday 11 April 2026 06:51:26 +0000 (0:00:02.952) 0:12:39.457 ******** 2026-04-11 06:51:55.490879 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:51:55.490897 | orchestrator | 2026-04-11 06:51:55.490916 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-11 06:51:55.490936 | orchestrator | Saturday 11 April 2026 06:51:29 +0000 (0:00:02.595) 0:12:42.052 ******** 2026-04-11 06:51:55.490955 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-11 06:51:55.490975 | orchestrator | 2026-04-11 06:51:55.490995 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-11 06:51:55.491014 | orchestrator | 2026-04-11 06:51:55.491031 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-11 06:51:55.491044 | orchestrator | Saturday 11 April 2026 06:51:42 +0000 (0:00:13.030) 0:12:55.082 ******** 2026-04-11 06:51:55.491057 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:51:55.491070 | orchestrator | changed: [testbed-node-1] 2026-04-11 06:51:55.491082 | 
orchestrator | changed: [testbed-node-2] 2026-04-11 06:51:55.491094 | orchestrator | 2026-04-11 06:51:55.491107 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-11 06:51:55.491144 | orchestrator | 2026-04-11 06:51:55.491157 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-11 06:51:55.491169 | orchestrator | Saturday 11 April 2026 06:51:44 +0000 (0:00:02.147) 0:12:57.230 ******** 2026-04-11 06:51:55.491181 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:51:55.491193 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:51:55.491205 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:51:55.491218 | orchestrator | 2026-04-11 06:51:55.491231 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-11 06:51:55.491243 | orchestrator | 2026-04-11 06:51:55.491255 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-11 06:51:55.491266 | orchestrator | Saturday 11 April 2026 06:51:46 +0000 (0:00:01.972) 0:12:59.202 ******** 2026-04-11 06:51:55.491276 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-11 06:51:55.491288 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-11 06:51:55.491299 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-11 06:51:55.491310 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-11 06:51:55.491321 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-11 06:51:55.491331 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-11 06:51:55.491342 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:51:55.491353 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-11 06:51:55.491364 | orchestrator | skipping: [testbed-node-4] => 
(item=nova-compute)  2026-04-11 06:51:55.491375 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-11 06:51:55.491385 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-11 06:51:55.491396 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-11 06:51:55.491406 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-11 06:51:55.491417 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:51:55.491428 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-11 06:51:55.491438 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-11 06:51:55.491449 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-11 06:51:55.491459 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-11 06:51:55.491470 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-11 06:51:55.491480 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-11 06:51:55.491491 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:51:55.491502 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-11 06:51:55.491512 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-11 06:51:55.491523 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-11 06:51:55.491534 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-11 06:51:55.491563 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-11 06:51:55.491575 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-11 06:51:55.491586 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:51:55.491597 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-11 06:51:55.491607 | orchestrator | skipping: [testbed-node-1] => 
(item=nova-compute)  2026-04-11 06:51:55.491618 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-11 06:51:55.491628 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-11 06:51:55.491639 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-11 06:51:55.491649 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-11 06:51:55.491660 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:51:55.491671 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-11 06:51:55.491690 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-11 06:51:55.491757 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-11 06:51:55.491771 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-11 06:51:55.491782 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-11 06:51:55.491793 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-11 06:51:55.491804 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:51:55.491814 | orchestrator | 2026-04-11 06:51:55.491833 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-04-11 06:51:55.491844 | orchestrator | 2026-04-11 06:51:55.491855 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-11 06:51:55.491866 | orchestrator | Saturday 11 April 2026 06:51:49 +0000 (0:00:02.712) 0:13:01.915 ******** 2026-04-11 06:51:55.491876 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-04-11 06:51:55.491887 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-11 06:51:55.491898 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:51:55.491909 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-04-11 06:51:55.491919 | 
orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-11 06:51:55.491930 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:51:55.491941 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-04-11 06:51:55.491951 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-11 06:51:55.491962 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:51:55.491973 | orchestrator | 2026-04-11 06:51:55.491983 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-11 06:51:55.491994 | orchestrator | 2026-04-11 06:51:55.492005 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-11 06:51:55.492016 | orchestrator | Saturday 11 April 2026 06:51:51 +0000 (0:00:01.937) 0:13:03.852 ******** 2026-04-11 06:51:55.492026 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:51:55.492037 | orchestrator | 2026-04-11 06:51:55.492048 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-11 06:51:55.492059 | orchestrator | 2026-04-11 06:51:55.492070 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-11 06:51:55.492080 | orchestrator | Saturday 11 April 2026 06:51:53 +0000 (0:00:01.980) 0:13:05.832 ******** 2026-04-11 06:51:55.492091 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:51:55.492102 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:51:55.492112 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:51:55.492123 | orchestrator | 2026-04-11 06:51:55.492134 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 06:51:55.492145 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 06:51:55.492159 | orchestrator | testbed-node-0 : ok=58  changed=25  unreachable=0 failed=0 
skipped=53  rescued=0 ignored=0 2026-04-11 06:51:55.492170 | orchestrator | testbed-node-1 : ok=31  changed=21  unreachable=0 failed=0 skipped=61  rescued=0 ignored=0 2026-04-11 06:51:55.492181 | orchestrator | testbed-node-2 : ok=31  changed=21  unreachable=0 failed=0 skipped=61  rescued=0 ignored=0 2026-04-11 06:51:55.492191 | orchestrator | testbed-node-3 : ok=49  changed=15  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-11 06:51:55.492202 | orchestrator | testbed-node-4 : ok=43  changed=14  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-11 06:51:55.492213 | orchestrator | testbed-node-5 : ok=48  changed=14  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-11 06:51:55.492231 | orchestrator | 2026-04-11 06:51:55.492242 | orchestrator | 2026-04-11 06:51:55.492253 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 06:51:55.492264 | orchestrator | Saturday 11 April 2026 06:51:55 +0000 (0:00:02.130) 0:13:07.963 ******** 2026-04-11 06:51:55.492275 | orchestrator | =============================================================================== 2026-04-11 06:51:55.492285 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 44.84s 2026-04-11 06:51:55.492296 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.01s 2026-04-11 06:51:55.492307 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 33.26s 2026-04-11 06:51:55.492326 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 32.80s 2026-04-11 06:51:55.932495 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 31.62s 2026-04-11 06:51:55.932604 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.76s 2026-04-11 06:51:55.932629 | orchestrator | nova-cell : Restart nova-novncproxy container 
-------------------------- 22.37s 2026-04-11 06:51:55.932650 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.66s 2026-04-11 06:51:55.932668 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 15.09s 2026-04-11 06:51:55.932680 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.81s 2026-04-11 06:51:55.932691 | orchestrator | nova : Restart nova-api container -------------------------------------- 13.86s 2026-04-11 06:51:55.932762 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.85s 2026-04-11 06:51:55.932774 | orchestrator | nova-cell : Update cell ------------------------------------------------ 13.67s 2026-04-11 06:51:55.932785 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 13.06s 2026-04-11 06:51:55.932796 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.03s 2026-04-11 06:51:55.932807 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.97s 2026-04-11 06:51:55.932837 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 11.85s 2026-04-11 06:51:55.932848 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.45s 2026-04-11 06:51:55.932859 | orchestrator | nova : Restart nova-metadata container --------------------------------- 11.15s 2026-04-11 06:51:55.932870 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 10.16s 2026-04-11 06:51:56.139956 | orchestrator | + osism apply nova-update-cell-mappings 2026-04-11 06:52:07.700127 | orchestrator | 2026-04-11 06:52:07 | INFO  | Prepare task for execution of nova-update-cell-mappings. 
2026-04-11 06:52:07.784272 | orchestrator | 2026-04-11 06:52:07 | INFO  | Task bcaaacdf-928a-44d6-8631-e0f344487d90 (nova-update-cell-mappings) was prepared for execution. 2026-04-11 06:52:07.784357 | orchestrator | 2026-04-11 06:52:07 | INFO  | It takes a moment until task bcaaacdf-928a-44d6-8631-e0f344487d90 (nova-update-cell-mappings) has been started and output is visible here. 2026-04-11 06:52:38.828496 | orchestrator | 2026-04-11 06:52:38.828599 | orchestrator | PLAY [Update Nova cell mappings] *********************************************** 2026-04-11 06:52:38.828611 | orchestrator | 2026-04-11 06:52:38.828620 | orchestrator | TASK [Get list of Nova cells] ************************************************** 2026-04-11 06:52:38.828629 | orchestrator | Saturday 11 April 2026 06:52:12 +0000 (0:00:01.568) 0:00:01.568 ******** 2026-04-11 06:52:38.828637 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:52:38.828646 | orchestrator | 2026-04-11 06:52:38.828654 | orchestrator | TASK [Parse cell information] ************************************************** 2026-04-11 06:52:38.828662 | orchestrator | Saturday 11 April 2026 06:52:27 +0000 (0:00:14.255) 0:00:15.823 ******** 2026-04-11 06:52:38.828670 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:52:38.828701 | orchestrator | 2026-04-11 06:52:38.828710 | orchestrator | TASK [Display cells to update] ************************************************* 2026-04-11 06:52:38.828718 | orchestrator | Saturday 11 April 2026 06:52:28 +0000 (0:00:01.216) 0:00:17.040 ******** 2026-04-11 06:52:38.828777 | orchestrator | ok: [testbed-node-0] => { 2026-04-11 06:52:38.828788 | orchestrator |  "msg": "Cells to update: [{'name': '', 'uuid': 'ef19a86e-7f24-470c-aebf-0ab315dfb0c0'}]" 2026-04-11 06:52:38.828797 | orchestrator | } 2026-04-11 06:52:38.828809 | orchestrator | 2026-04-11 06:52:38.828823 | orchestrator | TASK [Use specified cell UUID if provided] ************************************* 2026-04-11 06:52:38.828836 | 
orchestrator | Saturday 11 April 2026 06:52:29 +0000 (0:00:01.110) 0:00:18.151 ******** 2026-04-11 06:52:38.828849 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:52:38.828861 | orchestrator | 2026-04-11 06:52:38.828874 | orchestrator | TASK [Abort if multiple cells found without specific UUID and abort_on_multiple is enabled] *** 2026-04-11 06:52:38.828888 | orchestrator | Saturday 11 April 2026 06:52:30 +0000 (0:00:01.139) 0:00:19.290 ******** 2026-04-11 06:52:38.828900 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:52:38.828912 | orchestrator | 2026-04-11 06:52:38.828924 | orchestrator | TASK [Update Nova cell mappings] *********************************************** 2026-04-11 06:52:38.828936 | orchestrator | Saturday 11 April 2026 06:52:31 +0000 (0:00:01.102) 0:00:20.393 ******** 2026-04-11 06:52:38.828948 | orchestrator | changed: [testbed-node-0] => (item=ef19a86e-7f24-470c-aebf-0ab315dfb0c0) 2026-04-11 06:52:38.828961 | orchestrator | 2026-04-11 06:52:38.828974 | orchestrator | TASK [Display update results] ************************************************** 2026-04-11 06:52:38.828986 | orchestrator | Saturday 11 April 2026 06:52:36 +0000 (0:00:05.159) 0:00:25.552 ******** 2026-04-11 06:52:38.828997 | orchestrator | ok: [testbed-node-0] => (item=ef19a86e-7f24-470c-aebf-0ab315dfb0c0) => { 2026-04-11 06:52:38.829010 | orchestrator |  "msg": "Cell ef19a86e-7f24-470c-aebf-0ab315dfb0c0 updated successfully" 2026-04-11 06:52:38.829022 | orchestrator | } 2026-04-11 06:52:38.829034 | orchestrator | 2026-04-11 06:52:38.829046 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 06:52:38.829060 | orchestrator | testbed-node-0 : ok=5  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 06:52:38.829073 | orchestrator | 2026-04-11 06:52:38.829085 | orchestrator | 2026-04-11 06:52:38.829097 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-11 06:52:38.829109 | orchestrator | Saturday 11 April 2026 06:52:38 +0000 (0:00:01.604) 0:00:27.157 ******** 2026-04-11 06:52:38.829120 | orchestrator | =============================================================================== 2026-04-11 06:52:38.829132 | orchestrator | Get list of Nova cells ------------------------------------------------- 14.26s 2026-04-11 06:52:38.829144 | orchestrator | Update Nova cell mappings ----------------------------------------------- 5.16s 2026-04-11 06:52:38.829156 | orchestrator | Display update results -------------------------------------------------- 1.60s 2026-04-11 06:52:38.829168 | orchestrator | Parse cell information -------------------------------------------------- 1.22s 2026-04-11 06:52:38.829180 | orchestrator | Use specified cell UUID if provided ------------------------------------- 1.14s 2026-04-11 06:52:38.829192 | orchestrator | Display cells to update ------------------------------------------------- 1.11s 2026-04-11 06:52:38.829204 | orchestrator | Abort if multiple cells found without specific UUID and abort_on_multiple is enabled --- 1.10s 2026-04-11 06:52:39.031992 | orchestrator | + osism apply -a upgrade nova 2026-04-11 06:52:40.376501 | orchestrator | 2026-04-11 06:52:40 | INFO  | Prepare task for execution of nova. 2026-04-11 06:52:40.440046 | orchestrator | 2026-04-11 06:52:40 | INFO  | Task 0b2a30c9-752e-44df-a663-7571fea61a97 (nova) was prepared for execution. 2026-04-11 06:52:40.440142 | orchestrator | 2026-04-11 06:52:40 | INFO  | It takes a moment until task 0b2a30c9-752e-44df-a663-7571fea61a97 (nova) has been started and output is visible here. 
2026-04-11 06:53:54.234221 | orchestrator | 2026-04-11 06:53:54.234356 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 06:53:54.234368 | orchestrator | 2026-04-11 06:53:54.234376 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-11 06:53:54.234383 | orchestrator | Saturday 11 April 2026 06:52:46 +0000 (0:00:02.471) 0:00:02.471 ******** 2026-04-11 06:53:54.234390 | orchestrator | changed: [testbed-manager] 2026-04-11 06:53:54.234397 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:53:54.234404 | orchestrator | changed: [testbed-node-1] 2026-04-11 06:53:54.234411 | orchestrator | changed: [testbed-node-2] 2026-04-11 06:53:54.234417 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:53:54.234424 | orchestrator | changed: [testbed-node-4] 2026-04-11 06:53:54.234430 | orchestrator | changed: [testbed-node-5] 2026-04-11 06:53:54.234437 | orchestrator | 2026-04-11 06:53:54.234444 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 06:53:54.234451 | orchestrator | Saturday 11 April 2026 06:52:49 +0000 (0:00:03.537) 0:00:06.009 ******** 2026-04-11 06:53:54.234458 | orchestrator | changed: [testbed-manager] 2026-04-11 06:53:54.234465 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:53:54.234471 | orchestrator | changed: [testbed-node-1] 2026-04-11 06:53:54.234478 | orchestrator | changed: [testbed-node-2] 2026-04-11 06:53:54.234485 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:53:54.234491 | orchestrator | changed: [testbed-node-4] 2026-04-11 06:53:54.234498 | orchestrator | changed: [testbed-node-5] 2026-04-11 06:53:54.234504 | orchestrator | 2026-04-11 06:53:54.234511 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 06:53:54.234518 | orchestrator | Saturday 11 April 2026 06:52:51 +0000 (0:00:02.074) 0:00:08.084 
******** 2026-04-11 06:53:54.234524 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-04-11 06:53:54.234532 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-11 06:53:54.234539 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-11 06:53:54.234545 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-11 06:53:54.234552 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-04-11 06:53:54.234559 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-11 06:53:54.234565 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-11 06:53:54.234572 | orchestrator | 2026-04-11 06:53:54.234578 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-04-11 06:53:54.234585 | orchestrator | 2026-04-11 06:53:54.234592 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-11 06:53:54.234599 | orchestrator | Saturday 11 April 2026 06:52:54 +0000 (0:00:02.965) 0:00:11.050 ******** 2026-04-11 06:53:54.234605 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:53:54.234612 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:53:54.234619 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:53:54.234625 | orchestrator | 2026-04-11 06:53:54.234632 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-11 06:53:54.234639 | orchestrator | Saturday 11 April 2026 06:52:57 +0000 (0:00:02.519) 0:00:13.569 ******** 2026-04-11 06:53:54.234645 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:53:54.234652 | orchestrator | 2026-04-11 06:53:54.234659 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-11 06:53:54.234666 | orchestrator | Saturday 11 April 2026 06:53:00 
+0000 (0:00:02.928) 0:00:16.498 ******** 2026-04-11 06:53:54.234673 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:53:54.234680 | orchestrator | 2026-04-11 06:53:54.234687 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-11 06:53:54.234694 | orchestrator | Saturday 11 April 2026 06:53:02 +0000 (0:00:01.933) 0:00:18.432 ******** 2026-04-11 06:53:54.234701 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:53:54.234707 | orchestrator | 2026-04-11 06:53:54.234714 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-04-11 06:53:54.234727 | orchestrator | Saturday 11 April 2026 06:53:04 +0000 (0:00:02.048) 0:00:20.480 ******** 2026-04-11 06:53:54.234734 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:53:54.234740 | orchestrator | 2026-04-11 06:53:54.234747 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-11 06:53:54.234754 | orchestrator | Saturday 11 April 2026 06:53:08 +0000 (0:00:04.106) 0:00:24.587 ******** 2026-04-11 06:53:54.234760 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:53:54.234786 | orchestrator | 2026-04-11 06:53:54.234795 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-11 06:53:54.234802 | orchestrator | 2026-04-11 06:53:54.234810 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-11 06:53:54.234818 | orchestrator | Saturday 11 April 2026 06:53:27 +0000 (0:00:19.159) 0:00:43.747 ******** 2026-04-11 06:53:54.234826 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:53:54.234833 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:53:54.234841 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:53:54.234849 | orchestrator | 2026-04-11 06:53:54.234856 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 
2026-04-11 06:53:54.234864 | orchestrator | Saturday 11 April 2026 06:53:29 +0000 (0:00:01.566) 0:00:45.313 ******** 2026-04-11 06:53:54.234872 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:53:54.234879 | orchestrator | 2026-04-11 06:53:54.234887 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-11 06:53:54.234895 | orchestrator | Saturday 11 April 2026 06:53:31 +0000 (0:00:02.019) 0:00:47.333 ******** 2026-04-11 06:53:54.234903 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:53:54.234910 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:53:54.234918 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:53:54.234926 | orchestrator | 2026-04-11 06:53:54.234933 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-04-11 06:53:54.234941 | orchestrator | Saturday 11 April 2026 06:53:32 +0000 (0:00:01.499) 0:00:48.832 ******** 2026-04-11 06:53:54.234948 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:53:54.234956 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:53:54.234964 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:53:54.234971 | orchestrator | 2026-04-11 06:53:54.234995 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-04-11 06:53:54.235004 | orchestrator | Saturday 11 April 2026 06:53:34 +0000 (0:00:01.910) 0:00:50.743 ******** 2026-04-11 06:53:54.235012 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:53:54.235019 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:53:54.235027 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:53:54.235034 | orchestrator | 2026-04-11 06:53:54.235041 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-11 06:53:54.235049 | orchestrator | Saturday 11 April 2026 06:53:38 +0000 (0:00:03.597) 0:00:54.341 ******** 
2026-04-11 06:53:54.235057 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:53:54.235064 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:53:54.235072 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:53:54.235080 | orchestrator | 2026-04-11 06:53:54.235087 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-11 06:53:54.235095 | orchestrator | 2026-04-11 06:53:54.235102 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-11 06:53:54.235110 | orchestrator | Saturday 11 April 2026 06:53:50 +0000 (0:00:12.703) 0:01:07.045 ******** 2026-04-11 06:53:54.235118 | orchestrator | included: /ansible/roles/nova/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:53:54.235127 | orchestrator | 2026-04-11 06:53:54.235134 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-11 06:53:54.235141 | orchestrator | Saturday 11 April 2026 06:53:52 +0000 (0:00:01.920) 0:01:08.965 ******** 2026-04-11 06:53:54.235152 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:53:54.235169 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:53:54.235186 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:05.756727 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:05.757009 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:54:05.757029 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:05.757040 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:05.757080 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:54:05.757092 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:54:05.757110 | orchestrator | 2026-04-11 06:54:05.757120 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-11 06:54:05.757130 | orchestrator | Saturday 11 April 2026 06:53:56 +0000 (0:00:03.383) 0:01:12.349 ******** 2026-04-11 06:54:05.757139 | 
orchestrator | skipping: [testbed-node-0] 2026-04-11 06:54:05.757149 | orchestrator | 2026-04-11 06:54:05.757158 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-11 06:54:05.757166 | orchestrator | Saturday 11 April 2026 06:53:57 +0000 (0:00:01.165) 0:01:13.514 ******** 2026-04-11 06:54:05.757175 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:54:05.757184 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:54:05.757192 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:54:05.757201 | orchestrator | 2026-04-11 06:54:05.757209 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-11 06:54:05.757218 | orchestrator | Saturday 11 April 2026 06:53:59 +0000 (0:00:01.648) 0:01:15.163 ******** 2026-04-11 06:54:05.757227 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 06:54:05.757235 | orchestrator | 2026-04-11 06:54:05.757244 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-11 06:54:05.757253 | orchestrator | Saturday 11 April 2026 06:54:01 +0000 (0:00:02.076) 0:01:17.240 ******** 2026-04-11 06:54:05.757262 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:54:05.757272 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:54:05.757283 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:54:05.757293 | orchestrator | 2026-04-11 06:54:05.757303 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-11 06:54:05.757313 | orchestrator | Saturday 11 April 2026 06:54:02 +0000 (0:00:01.335) 0:01:18.576 ******** 2026-04-11 06:54:05.757323 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:54:05.757334 | orchestrator | 2026-04-11 06:54:05.757344 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] 
*********** 2026-04-11 06:54:05.757355 | orchestrator | Saturday 11 April 2026 06:54:04 +0000 (0:00:01.876) 0:01:20.452 ******** 2026-04-11 06:54:05.757366 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:05.757389 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:09.222193 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:09.222297 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:09.222314 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:09.222365 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:09.222415 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:54:09.222436 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2026-04-11 06:54:09.222452 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:54:09.222471 | orchestrator | 2026-04-11 06:54:09.222490 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-11 06:54:09.222501 | orchestrator | Saturday 11 April 2026 06:54:08 +0000 (0:00:04.425) 0:01:24.877 ******** 2026-04-11 06:54:09.222513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:09.222547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:11.015538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:54:11.016500 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:54:11.016538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:11.016554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:11.016567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:54:11.016602 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:54:11.016675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 
06:54:11.016691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:11.016703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:54:11.016715 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:54:11.016726 | orchestrator | 2026-04-11 06:54:11.016739 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-11 06:54:11.016751 | 
orchestrator | Saturday 11 April 2026 06:54:10 +0000 (0:00:01.793) 0:01:26.671 ******** 2026-04-11 06:54:11.016768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:11.016825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:14.199652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:54:14.199753 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:54:14.199771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:14.199829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:14.199878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:54:14.199890 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:54:14.199920 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:14.199932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:14.199943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:54:14.199960 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:54:14.199970 | orchestrator | 2026-04-11 06:54:14.199981 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-11 06:54:14.199992 | orchestrator | Saturday 11 April 2026 06:54:12 +0000 (0:00:02.209) 0:01:28.881 ******** 2026-04-11 06:54:14.200008 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:14.200027 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:20.551412 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:20.551517 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:20.551569 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:20.551635 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:20.551653 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:54:20.551664 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:54:20.551683 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:54:20.551694 | orchestrator | 2026-04-11 06:54:20.551706 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-11 06:54:20.551717 | orchestrator | Saturday 11 April 2026 
06:54:17 +0000 (0:00:04.369) 0:01:33.251 ******** 2026-04-11 06:54:20.551732 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:20.551751 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:27.573410 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:27.573566 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:27.573614 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:27.573649 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:54:27.573664 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:54:27.573686 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:54:27.573698 | orchestrator 
| ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:54:27.573710 | orchestrator | 2026-04-11 06:54:27.573723 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-11 06:54:27.573736 | orchestrator | Saturday 11 April 2026 06:54:27 +0000 (0:00:09.983) 0:01:43.235 ******** 2026-04-11 06:54:27.573754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}})  2026-04-11 06:54:27.573775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:39.687039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:54:39.687183 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:54:39.687204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:39.687233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:39.687248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:54:39.687260 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:54:39.687292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:39.687314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:54:39.687327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:54:39.687338 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:54:39.687350 | orchestrator | 2026-04-11 06:54:39.687362 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-11 06:54:39.687380 | orchestrator | Saturday 11 April 2026 06:54:29 +0000 (0:00:02.006) 0:01:45.241 
******** 2026-04-11 06:54:39.687391 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:54:39.687417 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:54:39.687428 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:54:39.687439 | orchestrator | 2026-04-11 06:54:39.687450 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-04-11 06:54:39.687461 | orchestrator | Saturday 11 April 2026 06:54:31 +0000 (0:00:02.121) 0:01:47.363 ******** 2026-04-11 06:54:39.687471 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:54:39.687482 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:54:39.687493 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:54:39.687503 | orchestrator | 2026-04-11 06:54:39.687514 | orchestrator | TASK [nova : Copying over vendordata file for nova services] ******************* 2026-04-11 06:54:39.687525 | orchestrator | Saturday 11 April 2026 06:54:32 +0000 (0:00:01.721) 0:01:49.084 ******** 2026-04-11 06:54:39.687536 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-04-11 06:54:39.687547 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-11 06:54:39.687558 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:54:39.687569 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-04-11 06:54:39.687580 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-11 06:54:39.687590 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:54:39.687601 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-04-11 06:54:39.687612 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-11 06:54:39.687622 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:54:39.687633 | orchestrator | 2026-04-11 06:54:39.687644 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-04-11 06:54:39.687662 | orchestrator | 
Saturday 11 April 2026 06:54:34 +0000 (0:00:01.392) 0:01:50.477 ******** 2026-04-11 06:54:39.687673 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'}) 2026-04-11 06:54:39.687685 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'}) 2026-04-11 06:54:39.687696 | orchestrator | 2026-04-11 06:54:39.687707 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-04-11 06:54:39.687718 | orchestrator | Saturday 11 April 2026 06:54:37 +0000 (0:00:03.012) 0:01:53.489 ******** 2026-04-11 06:54:39.687729 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:54:39.687740 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:54:39.687751 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:54:39.687761 | orchestrator | 2026-04-11 06:55:04.562666 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-04-11 06:55:04.562783 | orchestrator | Saturday 11 April 2026 06:54:40 +0000 (0:00:03.224) 0:01:56.714 ******** 2026-04-11 06:55:04.562799 | orchestrator | ok: [testbed-node-0] 2026-04-11 06:55:04.562868 | orchestrator | ok: [testbed-node-1] 2026-04-11 06:55:04.562879 | orchestrator | ok: [testbed-node-2] 2026-04-11 06:55:04.562890 | orchestrator | 2026-04-11 06:55:04.562902 | orchestrator | TASK [nova : Run Nova upgrade checks] ****************************************** 2026-04-11 06:55:04.562914 | orchestrator | Saturday 11 April 2026 06:54:44 +0000 (0:00:03.576) 0:02:00.290 ******** 2026-04-11 06:55:04.562925 | orchestrator | changed: [testbed-node-0] 2026-04-11 06:55:04.562937 | orchestrator | 2026-04-11 06:55:04.562949 | orchestrator | TASK [nova : Upgrade status check result] ************************************** 2026-04-11 06:55:04.562960 | orchestrator | Saturday 11 
April 2026 06:55:02 +0000 (0:00:17.942) 0:02:18.233 ******** 2026-04-11 06:55:04.562971 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:55:04.562982 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:55:04.562993 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:55:04.563004 | orchestrator | 2026-04-11 06:55:04.563015 | orchestrator | TASK [nova : Stopping top level nova services] ********************************* 2026-04-11 06:55:04.563026 | orchestrator | Saturday 11 April 2026 06:55:03 +0000 (0:00:01.433) 0:02:19.667 ******** 2026-04-11 06:55:04.563044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:55:04.563077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:55:04.563114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:55:04.563127 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:55:04.563159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:55:04.563173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:55:04.563193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:55:04.563214 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:55:04.563228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:55:04.563252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:55:09.835778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:55:09.835972 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:55:09.836004 | orchestrator | 2026-04-11 06:55:09.836027 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-04-11 06:55:09.836048 | orchestrator | Saturday 11 April 2026 06:55:06 +0000 (0:00:02.483) 0:02:22.150 ******** 2026-04-11 06:55:09.836090 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:55:09.836141 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:55:09.836168 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:55:09.836216 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:55:09.836250 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:55:09.836288 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 06:55:09.836311 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:55:09.836344 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:55:13.383355 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 06:55:13.383484 | orchestrator | 2026-04-11 06:55:13.383503 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-04-11 06:55:13.383516 | orchestrator | Saturday 11 April 2026 06:55:10 +0000 (0:00:04.949) 0:02:27.100 ******** 2026-04-11 06:55:13.383529 | orchestrator | ok: [testbed-node-0] => { 2026-04-11 06:55:13.383541 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:55:13.383553 | orchestrator | } 2026-04-11 06:55:13.383564 | orchestrator | ok: [testbed-node-1] => { 2026-04-11 06:55:13.383575 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:55:13.383609 | orchestrator | } 2026-04-11 06:55:13.383621 | orchestrator | ok: [testbed-node-2] => { 2026-04-11 06:55:13.383632 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:55:13.383642 | orchestrator | } 2026-04-11 06:55:13.383653 | orchestrator | 2026-04-11 06:55:13.383664 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 06:55:13.383675 | orchestrator | Saturday 11 April 2026 06:55:12 +0000 (0:00:01.403) 0:02:28.503 ******** 2026-04-11 06:55:13.383703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:55:13.383720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:55:13.383734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:55:13.383747 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:55:13.383778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:55:13.383806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:55:13.383846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:55:13.383858 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:55:13.383870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:55:13.383893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 06:55:56.783158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 06:55:56.783279 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:55:56.783297 | orchestrator | 2026-04-11 06:55:56.783310 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-11 06:55:56.783339 | orchestrator | Saturday 11 April 2026 06:55:14 +0000 (0:00:02.303) 0:02:30.807 ******** 2026-04-11 06:55:56.783353 | orchestrator | 2026-04-11 06:55:56.783364 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-11 06:55:56.783376 | orchestrator | Saturday 11 April 2026 06:55:15 +0000 (0:00:00.524) 0:02:31.331 ******** 2026-04-11 06:55:56.783388 | orchestrator | 2026-04-11 06:55:56.783400 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-11 06:55:56.783412 | orchestrator | Saturday 11 April 2026 06:55:15 +0000 (0:00:00.508) 0:02:31.840 ******** 2026-04-11 06:55:56.783423 | orchestrator | 2026-04-11 06:55:56.783435 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-11 06:55:56.783447 | orchestrator | 2026-04-11 06:55:56.783458 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-11 06:55:56.783470 | orchestrator | Saturday 11 April 2026 06:55:17 +0000 (0:00:01.480) 0:02:33.321 ******** 2026-04-11 06:55:56.783483 | orchestrator | included: /ansible/roles/nova-cell/tasks/upgrade.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:55:56.783496 | orchestrator | 2026-04-11 06:55:56.783507 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-04-11 06:55:56.783519 | orchestrator | Saturday 11 April 2026 
06:55:19 +0000 (0:00:02.703) 0:02:36.025 ******** 2026-04-11 06:55:56.783531 | orchestrator | changed: [testbed-node-3] 2026-04-11 06:55:56.783543 | orchestrator | 2026-04-11 06:55:56.783562 | orchestrator | TASK [nova-cell : Cache new Libvirt version] *********************************** 2026-04-11 06:55:56.783581 | orchestrator | Saturday 11 April 2026 06:55:24 +0000 (0:00:04.413) 0:02:40.438 ******** 2026-04-11 06:55:56.783600 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:55:56.783618 | orchestrator | 2026-04-11 06:55:56.783636 | orchestrator | TASK [Get nova_libvirt image info] ********************************************* 2026-04-11 06:55:56.783654 | orchestrator | Saturday 11 April 2026 06:55:26 +0000 (0:00:02.339) 0:02:42.778 ******** 2026-04-11 06:55:56.783673 | orchestrator | included: service-image-info for testbed-node-3 2026-04-11 06:55:56.783689 | orchestrator | 2026-04-11 06:55:56.783708 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] ***************** 2026-04-11 06:55:56.783726 | orchestrator | Saturday 11 April 2026 06:55:28 +0000 (0:00:02.075) 0:02:44.854 ******** 2026-04-11 06:55:56.783745 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:55:56.783763 | orchestrator | 2026-04-11 06:55:56.783783 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-11 06:55:56.783803 | orchestrator | Saturday 11 April 2026 06:55:33 +0000 (0:00:04.375) 0:02:49.229 ******** 2026-04-11 06:55:56.783816 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:55:56.783828 | orchestrator | 2026-04-11 06:55:56.783868 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] **************** 2026-04-11 06:55:56.783904 | orchestrator | Saturday 11 April 2026 06:55:36 +0000 (0:00:03.070) 0:02:52.299 ******** 2026-04-11 06:55:56.783916 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:55:56.783927 | orchestrator | 2026-04-11 06:55:56.783938 | orchestrator | 
TASK [service-image-info : set_fact] ******************************************* 2026-04-11 06:55:56.783948 | orchestrator | Saturday 11 April 2026 06:55:39 +0000 (0:00:03.072) 0:02:55.371 ******** 2026-04-11 06:55:56.783959 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:55:56.783970 | orchestrator | 2026-04-11 06:55:56.783980 | orchestrator | TASK [nova-cell : Get container facts] ***************************************** 2026-04-11 06:55:56.783991 | orchestrator | Saturday 11 April 2026 06:55:42 +0000 (0:00:03.029) 0:02:58.401 ******** 2026-04-11 06:55:56.784002 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:55:56.784013 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:55:56.784024 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:55:56.784035 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:55:56.784046 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:55:56.784056 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:55:56.784067 | orchestrator | 2026-04-11 06:55:56.784078 | orchestrator | TASK [nova-cell : Get current Libvirt version] ********************************* 2026-04-11 06:55:56.784089 | orchestrator | Saturday 11 April 2026 06:55:47 +0000 (0:00:05.448) 0:03:03.849 ******** 2026-04-11 06:55:56.784100 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:55:56.784111 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:55:56.784122 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:55:56.784133 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:55:56.784144 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:55:56.784155 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:55:56.784165 | orchestrator | 2026-04-11 06:55:56.784176 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************ 2026-04-11 06:55:56.784187 | orchestrator | Saturday 11 April 2026 06:55:52 +0000 (0:00:04.892) 0:03:08.742 ******** 2026-04-11 06:55:56.784198 | 
orchestrator | skipping: [testbed-node-3] 2026-04-11 06:55:56.784209 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:55:56.784219 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:55:56.784230 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:55:56.784241 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:55:56.784271 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:55:56.784283 | orchestrator | 2026-04-11 06:55:56.784294 | orchestrator | TASK [nova-cell : Stopping nova cell services] ********************************* 2026-04-11 06:55:56.784305 | orchestrator | Saturday 11 April 2026 06:55:55 +0000 (0:00:03.178) 0:03:11.921 ******** 2026-04-11 06:55:56.784325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:55:56.784340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:55:56.784361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:55:56.784374 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:55:56.784386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:55:56.784398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:55:56.784418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:56:07.830535 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:56:07.830652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:56:07.830686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:56:07.830697 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:56:07.830705 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:56:07.830713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:56:07.830722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:56:07.830733 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:56:07.830758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:56:07.830766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:56:07.830779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:56:07.830787 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:56:07.830793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:56:07.830801 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:56:07.830809 | orchestrator | 2026-04-11 06:56:07.830817 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-11 06:56:07.830825 | orchestrator | Saturday 11 April 2026 06:55:59 +0000 (0:00:03.285) 0:03:15.206 ******** 2026-04-11 06:56:07.830832 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:56:07.830898 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:56:07.830909 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:56:07.830917 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 06:56:07.830925 | orchestrator | 2026-04-11 06:56:07.830933 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-11 06:56:07.830940 | orchestrator | Saturday 11 April 2026 06:56:01 +0000 (0:00:02.435) 0:03:17.641 ******** 2026-04-11 06:56:07.830948 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-11 06:56:07.830955 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-11 06:56:07.830963 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-11 06:56:07.830970 | orchestrator | 2026-04-11 06:56:07.830977 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-11 06:56:07.830985 | orchestrator | Saturday 11 April 2026 06:56:03 +0000 (0:00:01.918) 0:03:19.560 ******** 2026-04-11 06:56:07.830991 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-11 06:56:07.830998 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 
2026-04-11 06:56:07.831006 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-11 06:56:07.831012 | orchestrator | 2026-04-11 06:56:07.831019 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-11 06:56:07.831027 | orchestrator | Saturday 11 April 2026 06:56:05 +0000 (0:00:02.280) 0:03:21.841 ******** 2026-04-11 06:56:07.831034 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-11 06:56:07.831042 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:56:07.831049 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-11 06:56:07.831056 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:56:07.831063 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-11 06:56:07.831076 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:56:07.831084 | orchestrator | 2026-04-11 06:56:07.831092 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-11 06:56:07.831100 | orchestrator | Saturday 11 April 2026 06:56:07 +0000 (0:00:01.377) 0:03:23.218 ******** 2026-04-11 06:56:07.831108 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-11 06:56:07.831116 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-11 06:56:07.831124 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:56:07.831139 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-11 06:56:16.369782 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-11 06:56:16.369946 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-11 06:56:16.369963 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-11 06:56:16.369975 | orchestrator | skipping: [testbed-node-1] 
2026-04-11 06:56:16.369988 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-11 06:56:16.369999 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-11 06:56:16.370010 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-11 06:56:16.370079 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:56:16.370091 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-11 06:56:16.370103 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-11 06:56:16.370114 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-11 06:56:16.370125 | orchestrator | 2026-04-11 06:56:16.370137 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-11 06:56:16.370149 | orchestrator | Saturday 11 April 2026 06:56:09 +0000 (0:00:02.369) 0:03:25.587 ******** 2026-04-11 06:56:16.370160 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:56:16.370171 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:56:16.370182 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:56:16.370193 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:56:16.370209 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:56:16.370227 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:56:16.370241 | orchestrator | 2026-04-11 06:56:16.370252 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-11 06:56:16.370263 | orchestrator | Saturday 11 April 2026 06:56:11 +0000 (0:00:02.190) 0:03:27.778 ******** 2026-04-11 06:56:16.370274 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:56:16.370285 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:56:16.370298 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:56:16.370310 | orchestrator | ok: 
[testbed-node-3] 2026-04-11 06:56:16.370323 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:56:16.370335 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:56:16.370349 | orchestrator | 2026-04-11 06:56:16.370362 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-11 06:56:16.370375 | orchestrator | Saturday 11 April 2026 06:56:14 +0000 (0:00:02.554) 0:03:30.333 ******** 2026-04-11 06:56:16.370391 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:56:16.370429 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 
06:56:16.370444 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:56:16.370486 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:56:16.370501 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:56:16.370515 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:56:16.370536 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:56:16.370550 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:56:16.370571 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:56:22.256289 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:56:22.256393 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:56:22.256409 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:56:22.256438 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:56:22.256449 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:56:22.256460 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:56:22.256470 | orchestrator | 2026-04-11 06:56:22.256502 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-11 06:56:22.256515 | orchestrator | Saturday 11 April 2026 06:56:17 +0000 (0:00:03.769) 0:03:34.103 ******** 2026-04-11 06:56:22.256526 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 06:56:22.256537 | orchestrator | 2026-04-11 06:56:22.256547 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-11 06:56:22.256557 | orchestrator | Saturday 11 April 2026 06:56:20 +0000 (0:00:02.188) 0:03:36.291 ******** 2026-04-11 06:56:22.256567 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:56:22.256579 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:56:22.256596 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:56:22.256607 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:56:22.256629 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:56:25.944658 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:56:25.944749 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:56:25.944782 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:56:25.944790 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:56:25.944797 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:56:25.944805 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:56:25.944840 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:56:25.944849 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:56:25.944913 | orchestrator | ok: [testbed-node-4] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:56:25.944928 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:56:25.944936 | orchestrator | 2026-04-11 06:56:25.944945 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-11 06:56:25.944954 | orchestrator | Saturday 11 April 2026 06:56:24 +0000 (0:00:04.504) 0:03:40.795 ******** 2026-04-11 06:56:25.944963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:56:25.944983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:56:26.804513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:56:26.804648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:56:26.804667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2026-04-11 06:56:26.804680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:56:26.804708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:56:26.804722 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:56:26.804753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:56:26.804766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:56:26.804786 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:56:26.804798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:56:26.804809 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:56:26.804820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:56:26.804832 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:56:26.804843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:56:26.804924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:56:26.804937 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:56:26.804958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:56:29.982670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:56:29.982781 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:56:29.982799 | orchestrator | 2026-04-11 06:56:29.982811 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-11 06:56:29.982823 | orchestrator | Saturday 11 April 2026 06:56:27 +0000 (0:00:03.271) 0:03:44.067 ******** 2026-04-11 06:56:29.982837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:56:29.982851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:56:29.982956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:56:29.982986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:56:29.983022 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:56:29.983055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', 
''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:56:29.983068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:56:29.983080 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:56:29.983092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:56:29.983103 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:56:29.983120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:56:29.983141 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:56:29.983161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  
2026-04-11 06:56:59.893786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:56:59.893979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:56:59.894000 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:56:59.894095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:56:59.894110 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:56:59.894122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:56:59.894149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:56:59.894186 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:56:59.894198 | orchestrator | 2026-04-11 06:56:59.894211 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-11 06:56:59.894223 | orchestrator | Saturday 11 April 2026 06:56:31 +0000 (0:00:03.941) 0:03:48.008 ******** 2026-04-11 06:56:59.894234 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:56:59.894245 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:56:59.894256 | 
orchestrator | skipping: [testbed-node-2] 2026-04-11 06:56:59.894267 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 06:56:59.894278 | orchestrator | 2026-04-11 06:56:59.894289 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-11 06:56:59.894300 | orchestrator | Saturday 11 April 2026 06:56:34 +0000 (0:00:02.302) 0:03:50.310 ******** 2026-04-11 06:56:59.894311 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 06:56:59.894323 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 06:56:59.894336 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 06:56:59.894348 | orchestrator | 2026-04-11 06:56:59.894360 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-11 06:56:59.894401 | orchestrator | Saturday 11 April 2026 06:56:36 +0000 (0:00:02.070) 0:03:52.381 ******** 2026-04-11 06:56:59.894422 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 06:56:59.894443 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 06:56:59.894463 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 06:56:59.894484 | orchestrator | 2026-04-11 06:56:59.894504 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-11 06:56:59.894523 | orchestrator | Saturday 11 April 2026 06:56:38 +0000 (0:00:02.059) 0:03:54.440 ******** 2026-04-11 06:56:59.894542 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:56:59.894562 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:56:59.894581 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:56:59.894601 | orchestrator | 2026-04-11 06:56:59.894621 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-11 06:56:59.894641 | orchestrator | Saturday 11 April 2026 06:56:40 +0000 (0:00:01.731) 0:03:56.171 
******** 2026-04-11 06:56:59.894659 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:56:59.894678 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:56:59.894696 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:56:59.894714 | orchestrator | 2026-04-11 06:56:59.894732 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-11 06:56:59.894750 | orchestrator | Saturday 11 April 2026 06:56:41 +0000 (0:00:01.570) 0:03:57.743 ******** 2026-04-11 06:56:59.894769 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-11 06:56:59.894789 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-11 06:56:59.894904 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-11 06:56:59.894926 | orchestrator | 2026-04-11 06:56:59.894946 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-11 06:56:59.894965 | orchestrator | Saturday 11 April 2026 06:56:43 +0000 (0:00:02.253) 0:03:59.996 ******** 2026-04-11 06:56:59.894984 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-11 06:56:59.895004 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-11 06:56:59.895023 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-11 06:56:59.895041 | orchestrator | 2026-04-11 06:56:59.895053 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-11 06:56:59.895064 | orchestrator | Saturday 11 April 2026 06:56:46 +0000 (0:00:02.262) 0:04:02.259 ******** 2026-04-11 06:56:59.895088 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-11 06:56:59.895099 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-11 06:56:59.895110 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-11 06:56:59.895120 | orchestrator | ok: [testbed-node-3] => (item=nova-libvirt) 2026-04-11 06:56:59.895131 | orchestrator | ok: 
[testbed-node-4] => (item=nova-libvirt) 2026-04-11 06:56:59.895141 | orchestrator | ok: [testbed-node-5] => (item=nova-libvirt) 2026-04-11 06:56:59.895152 | orchestrator | 2026-04-11 06:56:59.895163 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-11 06:56:59.895174 | orchestrator | Saturday 11 April 2026 06:56:51 +0000 (0:00:05.119) 0:04:07.379 ******** 2026-04-11 06:56:59.895184 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:56:59.895195 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:56:59.895205 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:56:59.895216 | orchestrator | 2026-04-11 06:56:59.895227 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-11 06:56:59.895237 | orchestrator | Saturday 11 April 2026 06:56:52 +0000 (0:00:01.413) 0:04:08.793 ******** 2026-04-11 06:56:59.895248 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:56:59.895259 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:56:59.895270 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:56:59.895281 | orchestrator | 2026-04-11 06:56:59.895291 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-11 06:56:59.895302 | orchestrator | Saturday 11 April 2026 06:56:54 +0000 (0:00:01.437) 0:04:10.230 ******** 2026-04-11 06:56:59.895313 | orchestrator | ok: [testbed-node-3] 2026-04-11 06:56:59.895324 | orchestrator | ok: [testbed-node-4] 2026-04-11 06:56:59.895334 | orchestrator | ok: [testbed-node-5] 2026-04-11 06:56:59.895345 | orchestrator | 2026-04-11 06:56:59.895356 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-11 06:56:59.895366 | orchestrator | Saturday 11 April 2026 06:56:56 +0000 (0:00:02.592) 0:04:12.823 ******** 2026-04-11 06:56:59.895386 | orchestrator | ok: [testbed-node-3] => (item={'uuid': 
'5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-11 06:56:59.895399 | orchestrator | ok: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-11 06:56:59.895409 | orchestrator | ok: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-11 06:56:59.895420 | orchestrator | ok: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-11 06:56:59.895431 | orchestrator | ok: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-11 06:56:59.895442 | orchestrator | ok: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-11 06:56:59.895452 | orchestrator | 2026-04-11 06:56:59.895463 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-11 06:57:21.216290 | orchestrator | Saturday 11 April 2026 06:57:00 +0000 (0:00:04.190) 0:04:17.013 ******** 2026-04-11 06:57:21.216395 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-11 06:57:21.216408 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-11 06:57:21.216416 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-11 06:57:21.216424 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-11 06:57:21.216454 | orchestrator | ok: [testbed-node-3] 
2026-04-11 06:57:21.216464 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-04-11 06:57:21.216473 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:57:21.216482 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-11 06:57:21.216490 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:57:21.216499 | orchestrator |
2026-04-11 06:57:21.216508 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] *************************
2026-04-11 06:57:21.216517 | orchestrator | Saturday 11 April 2026 06:57:05 +0000 (0:00:04.363) 0:04:21.377 ********
2026-04-11 06:57:21.216525 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:57:21.216534 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:57:21.216543 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:57:21.216552 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 06:57:21.216561 | orchestrator |
2026-04-11 06:57:21.216570 | orchestrator | TASK [nova-cell : Check qemu wrapper file] *************************************
2026-04-11 06:57:21.216578 | orchestrator | Saturday 11 April 2026 06:57:08 +0000 (0:00:03.365) 0:04:24.742 ********
2026-04-11 06:57:21.216587 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-11 06:57:21.216595 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-11 06:57:21.216603 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-11 06:57:21.216611 | orchestrator |
2026-04-11 06:57:21.216619 | orchestrator | TASK [nova-cell : Copy qemu wrapper] *******************************************
2026-04-11 06:57:21.216627 | orchestrator | Saturday 11 April 2026 06:57:10 +0000 (0:00:01.999) 0:04:26.741 ********
2026-04-11 06:57:21.216635 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:57:21.216644 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:57:21.216652 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:57:21.216661 | orchestrator |
2026-04-11 06:57:21.216669 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-04-11 06:57:21.216678 | orchestrator | Saturday 11 April 2026 06:57:11 +0000 (0:00:01.151) 0:04:28.071 ********
2026-04-11 06:57:21.216686 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:57:21.216695 | orchestrator |
2026-04-11 06:57:21.216704 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-04-11 06:57:21.216712 | orchestrator | Saturday 11 April 2026 06:57:13 +0000 (0:00:01.151) 0:04:29.222 ********
2026-04-11 06:57:21.216721 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:57:21.216729 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:57:21.216738 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:57:21.216746 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:57:21.216755 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:57:21.216764 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:57:21.216772 | orchestrator |
2026-04-11 06:57:21.216781 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-04-11 06:57:21.216790 | orchestrator | Saturday 11 April 2026 06:57:14 +0000 (0:00:01.853) 0:04:31.076 ********
2026-04-11 06:57:21.216799 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-11 06:57:21.216807 | orchestrator |
2026-04-11 06:57:21.216815 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-04-11 06:57:21.216824 | orchestrator | Saturday 11 April 2026 06:57:16 +0000 (0:00:01.802) 0:04:32.879 ********
2026-04-11 06:57:21.216833 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:57:21.216842 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:57:21.216850 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:57:21.216859 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:57:21.216868 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:57:21.216877 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:57:21.216908 | orchestrator |
2026-04-11 06:57:21.216917 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-04-11 06:57:21.216925 | orchestrator | Saturday 11 April 2026 06:57:18 +0000 (0:00:01.925) 0:04:34.805 ********
2026-04-11 06:57:21.216951 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 06:57:21.216991 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 06:57:21.217001 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 06:57:21.217011 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 06:57:21.217021 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 06:57:21.217034 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 06:57:21.217048 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 06:57:21.217065 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 06:57:24.536180 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 06:57:24.536289 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 06:57:24.536306 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 06:57:24.536319 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 06:57:24.536370 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 06:57:24.536384 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 06:57:24.536414 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 06:57:24.536427 | orchestrator |
2026-04-11 06:57:24.536441 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-04-11 06:57:24.536453 | orchestrator | Saturday 11 April 2026 06:57:23 +0000 (0:00:04.502) 0:04:39.308 ********
2026-04-11 06:57:24.536465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 06:57:24.536478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 06:57:24.536516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 06:57:24.536530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 06:57:24.536552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 06:57:36.737449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 06:57:36.737563 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 06:57:36.737617 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 06:57:36.737632 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 06:57:36.737643 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 06:57:36.737673 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 06:57:36.737686 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 06:57:36.737698 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 06:57:36.737723 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 06:57:36.737736 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 06:57:36.737748 | orchestrator |
2026-04-11 06:57:36.737762 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-04-11 06:57:36.737774 | orchestrator | Saturday 11 April 2026 06:57:31 +0000 (0:00:08.036) 0:04:47.344 ********
2026-04-11 06:57:36.737786 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:57:36.737797 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:57:36.737808 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:57:36.737819 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:57:36.737829 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:57:36.737840 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:57:36.737851 | orchestrator |
2026-04-11 06:57:36.737862 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-04-11 06:57:36.737873 | orchestrator | Saturday 11 April 2026 06:57:34 +0000 (0:00:02.964) 0:04:50.308 ********
2026-04-11 06:57:36.737883 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-11 06:57:36.737920 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-11 06:57:36.737931 | orchestrator | ok: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-11 06:57:36.737942 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-11 06:57:36.737953 | orchestrator | ok: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-11 06:57:36.737964 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-11 06:57:36.737976 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:57:36.737988 | orchestrator | ok: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-11 06:57:36.738001 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-11 06:57:36.738014 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:57:36.738093 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-11 06:58:07.035575 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:58:07.035684 | orchestrator | ok: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-11 06:58:07.035719 | orchestrator | ok: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-11 06:58:07.035727 | orchestrator | ok: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-11 06:58:07.035735 | orchestrator |
2026-04-11 06:58:07.035743 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-04-11 06:58:07.035750 | orchestrator | Saturday 11 April 2026 06:57:39 +0000 (0:00:04.813) 0:04:55.122 ********
2026-04-11 06:58:07.035758 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:58:07.035765 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:58:07.035772 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:58:07.035779 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:58:07.035786 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:58:07.035793 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:58:07.035800 | orchestrator |
2026-04-11 06:58:07.035807 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-04-11 06:58:07.035814 | orchestrator | Saturday 11 April 2026 06:57:40 +0000 (0:00:01.739) 0:04:56.862 ********
2026-04-11 06:58:07.035822 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-11 06:58:07.035830 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-11 06:58:07.035836 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-11 06:58:07.035844 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-11 06:58:07.035852 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-11 06:58:07.035859 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-11 06:58:07.035866 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-11 06:58:07.035887 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-11 06:58:07.035894 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-11 06:58:07.036053 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-11 06:58:07.036066 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:58:07.036073 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-11 06:58:07.036081 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:58:07.036088 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-11 06:58:07.036096 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-11 06:58:07.036103 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-11 06:58:07.036111 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:58:07.036118 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-11 06:58:07.036127 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-11 06:58:07.036135 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-11 06:58:07.036144 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-11 06:58:07.036152 | orchestrator |
2026-04-11 06:58:07.036161 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-04-11 06:58:07.036178 | orchestrator | Saturday 11 April 2026 06:57:47 +0000 (0:00:06.585) 0:05:03.447 ********
2026-04-11 06:58:07.036187 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 06:58:07.036195 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 06:58:07.036217 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 06:58:07.036226 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 06:58:07.036234 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 06:58:07.036242 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 06:58:07.036251 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 06:58:07.036259 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 06:58:07.036267 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 06:58:07.036291 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 06:58:07.036300 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-11 06:58:07.036307 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 06:58:07.036316 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 06:58:07.036324 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:58:07.036332 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 06:58:07.036340 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:58:07.036349 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 06:58:07.036357 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 06:58:07.036365 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:58:07.036372 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 06:58:07.036378 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-11 06:58:07.036384 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 06:58:07.036390 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 06:58:07.036397 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-11 06:58:07.036405 | orchestrator | ok: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 06:58:07.036413 | orchestrator | ok: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 06:58:07.036419 | orchestrator | ok: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-11 06:58:07.036425 | orchestrator |
2026-04-11 06:58:07.036432 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-04-11 06:58:07.036438 | orchestrator | Saturday 11 April 2026 06:57:55 +0000 (0:00:08.517) 0:05:11.965 ********
2026-04-11 06:58:07.036444 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:58:07.036451 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:58:07.036460 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:58:07.036467 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:58:07.036476 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:58:07.036490 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:58:07.036498 | orchestrator |
2026-04-11 06:58:07.036506 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-11 06:58:07.036513 | orchestrator | Saturday 11 April 2026 06:57:57 +0000 (0:00:01.772) 0:05:13.737 ********
2026-04-11 06:58:07.036526 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:58:07.036533 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:58:07.036540 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:58:07.036548 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:58:07.036555 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:58:07.036562 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:58:07.036570 | orchestrator |
2026-04-11 06:58:07.036577 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-11 06:58:07.036584 | orchestrator | Saturday 11 April 2026 06:57:59 +0000 (0:00:02.031) 0:05:15.769 ********
2026-04-11 06:58:07.036591 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:58:07.036597 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:58:07.036604 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:58:07.036610 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:58:07.036616 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:58:07.036622 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:58:07.036628 | orchestrator |
2026-04-11 06:58:07.036635 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-04-11 06:58:07.036641 | orchestrator | Saturday 11 April 2026 06:58:02 +0000 (0:00:02.948) 0:05:18.717 ********
2026-04-11 06:58:07.036647 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:58:07.036654 | orchestrator | ok: [testbed-node-3]
2026-04-11 06:58:07.036661 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:58:07.036668 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:58:07.036675 | orchestrator | ok: [testbed-node-4]
2026-04-11 06:58:07.036681 | orchestrator | ok: [testbed-node-5]
2026-04-11 06:58:07.036687 | orchestrator |
2026-04-11 06:58:07.036693 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-11 06:58:07.036699 | orchestrator | Saturday 11 April 2026 06:58:05 +0000 (0:00:03.247) 0:05:21.965 ********
2026-04-11 06:58:07.036708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 06:58:07.036727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 06:58:07.912389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:58:07.912504 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:58:07.912531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:58:07.912541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:58:07.912549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:58:07.912555 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:58:07.912562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-11 06:58:07.912584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-11 06:58:07.912599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-11 06:58:07.912606 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:58:07.912614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:58:07.912621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:58:07.912629 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:58:07.912636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:58:07.912642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:58:07.912648 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:58:07.912660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-11 06:58:13.938488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-11 06:58:13.938600 | orchestrator | 
skipping: [testbed-node-1] 2026-04-11 06:58:13.938622 | orchestrator | 2026-04-11 06:58:13.938664 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-11 06:58:13.938686 | orchestrator | Saturday 11 April 2026 06:58:09 +0000 (0:00:03.352) 0:05:25.317 ******** 2026-04-11 06:58:13.938706 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-11 06:58:13.938725 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-11 06:58:13.938744 | orchestrator | skipping: [testbed-node-3] 2026-04-11 06:58:13.938761 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-11 06:58:13.938781 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-11 06:58:13.938800 | orchestrator | skipping: [testbed-node-4] 2026-04-11 06:58:13.938819 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-11 06:58:13.938839 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-11 06:58:13.938859 | orchestrator | skipping: [testbed-node-5] 2026-04-11 06:58:13.938878 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-11 06:58:13.938898 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-11 06:58:13.938909 | orchestrator | skipping: [testbed-node-0] 2026-04-11 06:58:13.938982 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-11 06:58:13.938994 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-11 06:58:13.939005 | orchestrator | skipping: [testbed-node-1] 2026-04-11 06:58:13.939016 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-11 06:58:13.939027 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-11 06:58:13.939038 | orchestrator | skipping: [testbed-node-2] 2026-04-11 06:58:13.939049 | orchestrator | 2026-04-11 06:58:13.939060 | 
orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-04-11 06:58:13.939071 | orchestrator | Saturday 11 April 2026 06:58:11 +0000 (0:00:02.066) 0:05:27.384 ******** 2026-04-11 06:58:13.939084 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:58:13.939119 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:58:13.939155 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-11 06:58:13.939176 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:58:13.939189 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:58:13.939201 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:58:13.939214 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:58:13.939232 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-11 06:58:13.939252 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-11 06:58:19.277141 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:58:19.277285 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:58:19.277315 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:58:19.277337 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:58:19.277388 | orchestrator | ok: 
[testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-11 06:58:19.277435 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 06:58:19.277457 | orchestrator | 2026-04-11 06:58:19.277479 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-04-11 06:58:19.277499 | orchestrator | Saturday 11 April 2026 06:58:16 +0000 (0:00:05.004) 0:05:32.388 ******** 2026-04-11 06:58:19.277520 | orchestrator | ok: [testbed-node-3] => { 2026-04-11 06:58:19.277541 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:58:19.277561 | orchestrator | } 2026-04-11 06:58:19.277581 | orchestrator | ok: [testbed-node-4] => { 2026-04-11 
06:58:19.277601 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:58:19.277620 | orchestrator | } 2026-04-11 06:58:19.277640 | orchestrator | ok: [testbed-node-5] => { 2026-04-11 06:58:19.277660 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:58:19.277688 | orchestrator | } 2026-04-11 06:58:19.277709 | orchestrator | ok: [testbed-node-0] => { 2026-04-11 06:58:19.277727 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:58:19.277747 | orchestrator | } 2026-04-11 06:58:19.277766 | orchestrator | ok: [testbed-node-1] => { 2026-04-11 06:58:19.277785 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:58:19.277804 | orchestrator | } 2026-04-11 06:58:19.277822 | orchestrator | ok: [testbed-node-2] => { 2026-04-11 06:58:19.277842 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 06:58:19.277862 | orchestrator | } 2026-04-11 06:58:19.277881 | orchestrator | 2026-04-11 06:58:19.277901 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 06:58:19.277949 | orchestrator | Saturday 11 April 2026 06:58:18 +0000 (0:00:02.089) 0:05:34.477 ******** 2026-04-11 06:58:19.277969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 06:58:19.278004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 06:58:19.278105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 06:58:19.278143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 06:58:23.133325 | orchestrator | skipping: [testbed-node-3]
2026-04-11 06:58:23.133448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 06:58:23.133468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-11 06:58:23.133506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-11 06:58:23.133519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 06:58:23.133532 | orchestrator | skipping: [testbed-node-4]
2026-04-11 06:58:23.133544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-11 06:58:23.133555 | orchestrator | skipping: [testbed-node-5]
2026-04-11 06:58:23.133586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 06:58:23.133605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 06:58:23.133618 | orchestrator | skipping: [testbed-node-0]
2026-04-11 06:58:23.133638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 06:58:23.133649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 06:58:23.133661 | orchestrator | skipping: [testbed-node-2]
2026-04-11 06:58:23.133672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-11 06:58:23.133684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-11 06:58:23.133695 | orchestrator | skipping: [testbed-node-1]
2026-04-11 06:58:23.133707 | orchestrator |
2026-04-11 06:58:23.133719 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-11 06:58:23.133731 | orchestrator | Saturday 11 April 2026 06:58:22 +0000 (0:00:03.692) 0:05:38.170 ********
2026-04-11 06:58:23.133742 | orchestrator |
2026-04-11 06:58:23.133754 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-11 06:58:23.133772 | orchestrator | Saturday 11 April 2026 06:58:22 +0000 (0:00:00.526) 0:05:38.697 ********
2026-04-11 06:58:23.133790 | orchestrator |
2026-04-11 06:58:23.133809 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-11 06:58:23.133836 | orchestrator | Saturday 11 April 2026 06:58:23 +0000 (0:00:00.547) 0:05:39.245 ********
2026-04-11 07:00:54.574885 | orchestrator |
2026-04-11 07:00:54.574975 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-11 07:00:54.574984 | orchestrator | Saturday 11 April 2026 06:58:23 +0000 (0:00:00.734) 0:05:39.980 ********
2026-04-11 07:00:54.574991 | orchestrator |
2026-04-11 07:00:54.574996 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-11 07:00:54.575023 | orchestrator | Saturday 11 April 2026 06:58:24 +0000 (0:00:00.537) 0:05:40.517 ********
2026-04-11 07:00:54.575029 | orchestrator |
2026-04-11 07:00:54.575035 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-11 07:00:54.575069 | orchestrator | Saturday 11 April 2026 06:58:24 +0000 (0:00:00.520) 0:05:41.038 ********
2026-04-11 07:00:54.575075 | orchestrator |
2026-04-11 07:00:54.575080 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-04-11 07:00:54.575086 | orchestrator |
2026-04-11 07:00:54.575091 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-04-11 07:00:54.575096 | orchestrator | Saturday 11 April 2026 06:58:26 +0000 (0:00:01.990) 0:05:43.028 ********
2026-04-11 07:00:54.575102 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:00:54.575109 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:00:54.575114 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:00:54.575119 | orchestrator |
2026-04-11 07:00:54.575125 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-04-11 07:00:54.575130 | orchestrator |
2026-04-11 07:00:54.575135 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-04-11 07:00:54.575141 | orchestrator | Saturday 11 April 2026 06:58:28 +0000 (0:00:01.692) 0:05:44.721 ********
2026-04-11 07:00:54.575146 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:00:54.575151 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:00:54.575157 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:00:54.575162 | orchestrator |
2026-04-11 07:00:54.575167 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-11 07:00:54.575173 | orchestrator |
2026-04-11 07:00:54.575178 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-11 07:00:54.575183 | orchestrator | Saturday 11 April 2026 06:58:31 +0000 (0:00:02.634) 0:05:47.355 ********
2026-04-11 07:00:54.575189 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-11 07:00:54.575194 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-11 07:00:54.575199 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-11 07:00:54.575205 | orchestrator | changed: [testbed-node-2] => (item=nova-conductor)
2026-04-11 07:00:54.575211 | orchestrator | changed: [testbed-node-1] => (item=nova-conductor)
2026-04-11 07:00:54.575216 | orchestrator | changed: [testbed-node-0] => (item=nova-conductor)
2026-04-11 07:00:54.575222 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-11 07:00:54.575227 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-11 07:00:54.575233 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-11 07:00:54.575238 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-11 07:00:54.575243 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-11 07:00:54.575248 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-11 07:00:54.575254 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-11 07:00:54.575259 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-11 07:00:54.575265 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-11 07:00:54.575270 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-11 07:00:54.575275 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-11 07:00:54.575281 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-11 07:00:54.575286 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-11 07:00:54.575291 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-11 07:00:54.575296 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-11 07:00:54.575302 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-11 07:00:54.575307 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-11 07:00:54.575312 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-11 07:00:54.575317 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-11 07:00:54.575327 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-11 07:00:54.575332 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-11 07:00:54.575338 | orchestrator | changed: [testbed-node-1] => (item=nova-novncproxy)
2026-04-11 07:00:54.575343 | orchestrator | changed: [testbed-node-2] => (item=nova-novncproxy)
2026-04-11 07:00:54.575348 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-11 07:00:54.575354 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-11 07:00:54.575359 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-11 07:00:54.575364 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-11 07:00:54.575370 | orchestrator | changed: [testbed-node-0] => (item=nova-novncproxy)
2026-04-11 07:00:54.575375 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-11 07:00:54.575380 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-11 07:00:54.575386 | orchestrator |
2026-04-11 07:00:54.575391 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-11 07:00:54.575397 | orchestrator |
2026-04-11 07:00:54.575402 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-11 07:00:54.575408 | orchestrator | Saturday 11 April 2026 07:00:03 +0000 (0:01:32.030) 0:07:19.385 ********
2026-04-11 07:00:54.575424 | orchestrator | changed: [testbed-node-0] => (item=nova-scheduler)
2026-04-11 07:00:54.575430 | orchestrator | changed: [testbed-node-1] => (item=nova-scheduler)
2026-04-11 07:00:54.575436 | orchestrator | changed: [testbed-node-2] => (item=nova-scheduler)
2026-04-11 07:00:54.575441 | orchestrator | changed: [testbed-node-0] => (item=nova-api)
2026-04-11 07:00:54.575447 | orchestrator | changed: [testbed-node-1] => (item=nova-api)
2026-04-11 07:00:54.575453 | orchestrator | changed: [testbed-node-2] => (item=nova-api)
2026-04-11 07:00:54.575459 | orchestrator |
2026-04-11 07:00:54.575466 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-11 07:00:54.575472 | orchestrator |
2026-04-11 07:00:54.575481 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-11 07:00:54.575488 | orchestrator | Saturday 11 April 2026 07:00:23 +0000 (0:00:19.980) 0:07:39.366 ********
2026-04-11 07:00:54.575494 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:00:54.575501 | orchestrator |
2026-04-11 07:00:54.575507 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-11 07:00:54.575513 | orchestrator |
2026-04-11 07:00:54.575519 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-11 07:00:54.575526 | orchestrator | Saturday 11 April 2026 07:00:40 +0000 (0:00:16.858) 0:07:56.225 ********
2026-04-11 07:00:54.575532 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:00:54.575538 | orchestrator | skipping:
[testbed-node-2]
2026-04-11 07:00:54.575544 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:00:54.575550 | orchestrator |
2026-04-11 07:00:54.575556 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 07:00:54.575563 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 07:00:54.575572 | orchestrator | testbed-node-0 : ok=39  changed=8  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-11 07:00:54.575578 | orchestrator | testbed-node-1 : ok=27  changed=5  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0
2026-04-11 07:00:54.575584 | orchestrator | testbed-node-2 : ok=27  changed=5  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0
2026-04-11 07:00:54.575589 | orchestrator | testbed-node-3 : ok=43  changed=5  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-11 07:00:54.575599 | orchestrator | testbed-node-4 : ok=37  changed=4  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-04-11 07:00:54.575604 | orchestrator | testbed-node-5 : ok=37  changed=4  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-04-11 07:00:54.575609 | orchestrator |
2026-04-11 07:00:54.575615 | orchestrator |
2026-04-11 07:00:54.575620 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 07:00:54.575625 | orchestrator | Saturday 11 April 2026 07:00:54 +0000 (0:00:14.022) 0:08:10.247 ********
2026-04-11 07:00:54.575631 | orchestrator | ===============================================================================
2026-04-11 07:00:54.575636 | orchestrator | nova-cell : Reload nova cell services to remove RPC version cap -------- 92.03s
2026-04-11 07:00:54.575641 | orchestrator | nova : Reload nova API services to remove RPC version pin -------------- 19.98s
2026-04-11 07:00:54.575647 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.16s
2026-04-11 07:00:54.575652 | orchestrator | nova : Run Nova upgrade checks ----------------------------------------- 17.94s
2026-04-11 07:00:54.575657 | orchestrator | nova : Run Nova API online database migrations ------------------------- 16.86s
2026-04-11 07:00:54.575663 | orchestrator | nova-cell : Run Nova cell online database migrations ------------------- 14.02s
2026-04-11 07:00:54.575668 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 12.70s
2026-04-11 07:00:54.575674 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.98s
2026-04-11 07:00:54.575679 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.52s
2026-04-11 07:00:54.575684 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.04s
2026-04-11 07:00:54.575689 | orchestrator | nova-cell : Copying over libvirt SASL configuration --------------------- 6.59s
2026-04-11 07:00:54.575695 | orchestrator | nova-cell : Get container facts ----------------------------------------- 5.45s
2026-04-11 07:00:54.575700 | orchestrator | nova-cell : Copy over ceph.conf ----------------------------------------- 5.12s
2026-04-11 07:00:54.575705 | orchestrator | service-check-containers : nova_cell | Check containers ----------------- 5.00s
2026-04-11 07:00:54.575711 | orchestrator | service-check-containers : nova | Check containers ---------------------- 4.95s
2026-04-11 07:00:54.575716 | orchestrator | nova-cell : Get current Libvirt version --------------------------------- 4.89s
2026-04-11 07:00:54.575721 | orchestrator | nova-cell : Flush handlers ---------------------------------------------- 4.86s
2026-04-11 07:00:54.575726 | orchestrator | nova-cell : Copying over libvirt configuration -------------------------- 4.81s
2026-04-11 07:00:54.575732 | orchestrator | service-cert-copy : nova | Copying over extra CA certificates ----------- 4.50s
2026-04-11 07:00:54.575737 | orchestrator | nova-cell : Copying over config.json files for services ----------------- 4.50s
2026-04-11 07:00:54.764144 | orchestrator | + osism apply -a upgrade horizon
2026-04-11 07:00:56.146136 | orchestrator | 2026-04-11 07:00:56 | INFO  | Prepare task for execution of horizon.
2026-04-11 07:00:56.212549 | orchestrator | 2026-04-11 07:00:56 | INFO  | Task c1ad27eb-14c4-453f-a3ce-44c8341b0136 (horizon) was prepared for execution.
2026-04-11 07:00:56.212649 | orchestrator | 2026-04-11 07:00:56 | INFO  | It takes a moment until task c1ad27eb-14c4-453f-a3ce-44c8341b0136 (horizon) has been started and output is visible here.
2026-04-11 07:01:05.257946 | orchestrator |
2026-04-11 07:01:05.258120 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 07:01:05.258132 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-11 07:01:05.258140 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-11 07:01:05.258152 | orchestrator |
2026-04-11 07:01:05.258158 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 07:01:05.258183 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-11 07:01:05.258189 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-11 07:01:05.258200 | orchestrator | Saturday 11 April 2026 07:01:00 +0000 (0:00:01.225) 0:00:01.225 ********
2026-04-11 07:01:05.258206 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:01:05.258213 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:01:05.258218 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:01:05.258224 | orchestrator |
2026-04-11 07:01:05.258230 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 07:01:05.258236 | orchestrator | Saturday 11 April
2026 07:01:01 +0000 (0:00:00.707) 0:00:01.933 ********
2026-04-11 07:01:05.258241 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-11 07:01:05.258248 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-11 07:01:05.258253 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-11 07:01:05.258259 | orchestrator |
2026-04-11 07:01:05.258265 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-11 07:01:05.258271 | orchestrator |
2026-04-11 07:01:05.258277 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-11 07:01:05.258283 | orchestrator | Saturday 11 April 2026 07:01:02 +0000 (0:00:00.782) 0:00:02.715 ********
2026-04-11 07:01:05.258289 | orchestrator | included: /ansible/roles/horizon/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 07:01:05.258295 | orchestrator |
2026-04-11 07:01:05.258301 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-04-11 07:01:05.258307 | orchestrator | Saturday 11 April 2026 07:01:03 +0000 (0:00:01.341) 0:00:04.057 ********
2026-04-11 07:01:05.258319 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 07:01:05.258351 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 07:01:05.258367 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']},
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-11 07:01:11.920599 | orchestrator |
2026-04-11 07:01:11.920725 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-04-11 07:01:11.920744 | orchestrator | Saturday 11 April 2026 07:01:05 +0000 (0:00:01.782) 0:00:05.840 ********
2026-04-11 07:01:11.920761 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:01:11.920781 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:01:11.920799 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:01:11.920817 | orchestrator |
2026-04-11 07:01:11.920837 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-11 07:01:11.920857 | orchestrator | Saturday 11 April 2026 07:01:05 +0000 (0:00:00.330) 0:00:06.170 ********
2026-04-11 07:01:11.920876 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-11 07:01:11.920894 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-11 07:01:11.920906 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-04-11 07:01:11.920917 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-04-11 07:01:11.920927 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-04-11 07:01:11.920938 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-04-11 07:01:11.920949 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-04-11 07:01:11.920959 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-04-11 07:01:11.921005 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-11 07:01:11.921044 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-11 07:01:11.921055 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-04-11 07:01:11.921066 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-04-11 07:01:11.921077 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-04-11 07:01:11.921088 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-11 07:01:11.921101 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-11 07:01:11.921114 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-04-11 07:01:11.921127 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-04-11 07:01:11.921140 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-04-11 07:01:11.921153 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-04-11 07:01:11.921166 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-04-11 07:01:11.921178 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-04-11 07:01:11.921192 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-04-11 07:01:11.921204 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-04-11 07:01:11.921216 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-04-11 07:01:11.921231 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-04-11 07:01:11.921272 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-04-11 07:01:11.921284 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-04-11 07:01:11.921295 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-04-11 07:01:11.921306 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-04-11 07:01:11.921317 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-04-11 07:01:11.921328 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-04-11 07:01:11.921339 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-04-11 07:01:11.921349 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-04-11 07:01:11.921394 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-04-11 07:01:11.921408 | orchestrator |
2026-04-11 07:01:11.921421 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 07:01:11.921432 | orchestrator | Saturday 11 April 2026 07:01:06 +0000 (0:00:01.321) 0:00:07.491 ********
2026-04-11 07:01:11.921443 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:01:11.921454 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:01:11.921465 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:01:11.921476 | orchestrator |
2026-04-11 07:01:11.921486 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 07:01:11.921497 | orchestrator | Saturday 11 April 2026 07:01:07 +0000 (0:00:00.338) 0:00:07.829 ********
2026-04-11 07:01:11.921508 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:01:11.921520 | orchestrator |
2026-04-11 07:01:11.921532 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-11 07:01:11.921543 | orchestrator | Saturday 11 April 2026 07:01:07 +0000 (0:00:00.140) 0:00:07.970 ********
2026-04-11 07:01:11.921554 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:01:11.921565 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:01:11.921576 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:01:11.921587 | orchestrator |
2026-04-11 07:01:11.921597 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-11 07:01:11.921608 | orchestrator | Saturday 11 April 2026 07:01:07 +0000 (0:00:00.338) 0:00:08.309 ********
2026-04-11 07:01:11.921619 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:01:11.921630 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:01:11.921641 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:01:11.921651 | orchestrator |
2026-04-11 07:01:11.921662 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-11 07:01:11.921673 | orchestrator | Saturday 11 April 2026 07:01:08 +0000 (0:00:00.518) 0:00:08.827 ********
2026-04-11 07:01:11.921684 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:01:11.921695 | orchestrator |
2026-04-11 07:01:11.921706 | orchestrator | TASK [horizon
: Update custom policy file name] ******************************** 2026-04-11 07:01:11.921717 | orchestrator | Saturday 11 April 2026 07:01:08 +0000 (0:00:00.150) 0:00:08.978 ******** 2026-04-11 07:01:11.921728 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:11.921746 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:01:11.921757 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:01:11.921768 | orchestrator | 2026-04-11 07:01:11.921779 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 07:01:11.921790 | orchestrator | Saturday 11 April 2026 07:01:08 +0000 (0:00:00.301) 0:00:09.279 ******** 2026-04-11 07:01:11.921801 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:01:11.921812 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:01:11.921823 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:01:11.921834 | orchestrator | 2026-04-11 07:01:11.921845 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-11 07:01:11.921856 | orchestrator | Saturday 11 April 2026 07:01:09 +0000 (0:00:00.337) 0:00:09.616 ******** 2026-04-11 07:01:11.921867 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:11.921878 | orchestrator | 2026-04-11 07:01:11.921889 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-11 07:01:11.921899 | orchestrator | Saturday 11 April 2026 07:01:09 +0000 (0:00:00.126) 0:00:09.743 ******** 2026-04-11 07:01:11.921910 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:11.921921 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:01:11.921932 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:01:11.921943 | orchestrator | 2026-04-11 07:01:11.921953 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 07:01:11.921964 | orchestrator | Saturday 11 April 2026 07:01:09 +0000 (0:00:00.550) 
0:00:10.294 ******** 2026-04-11 07:01:11.921975 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:01:11.921986 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:01:11.921997 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:01:11.922007 | orchestrator | 2026-04-11 07:01:11.922094 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-11 07:01:11.922105 | orchestrator | Saturday 11 April 2026 07:01:10 +0000 (0:00:00.316) 0:00:10.610 ******** 2026-04-11 07:01:11.922116 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:11.922127 | orchestrator | 2026-04-11 07:01:11.922138 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-11 07:01:11.922149 | orchestrator | Saturday 11 April 2026 07:01:10 +0000 (0:00:00.138) 0:00:10.748 ******** 2026-04-11 07:01:11.922210 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:11.922222 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:01:11.922233 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:01:11.922244 | orchestrator | 2026-04-11 07:01:11.922255 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 07:01:11.922266 | orchestrator | Saturday 11 April 2026 07:01:10 +0000 (0:00:00.316) 0:00:11.065 ******** 2026-04-11 07:01:11.922277 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:01:11.922288 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:01:11.922299 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:01:11.922310 | orchestrator | 2026-04-11 07:01:11.922321 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-11 07:01:11.922343 | orchestrator | Saturday 11 April 2026 07:01:11 +0000 (0:00:00.536) 0:00:11.602 ******** 2026-04-11 07:01:11.922354 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:11.922365 | orchestrator | 2026-04-11 07:01:11.922376 | orchestrator | 
TASK [horizon : Update custom policy file name] ******************************** 2026-04-11 07:01:11.922387 | orchestrator | Saturday 11 April 2026 07:01:11 +0000 (0:00:00.146) 0:00:11.748 ******** 2026-04-11 07:01:11.922398 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:11.922409 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:01:11.922420 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:01:11.922444 | orchestrator | 2026-04-11 07:01:11.922455 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 07:01:11.922466 | orchestrator | Saturday 11 April 2026 07:01:11 +0000 (0:00:00.313) 0:00:12.061 ******** 2026-04-11 07:01:11.922477 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:01:11.922488 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:01:11.922508 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:01:11.922519 | orchestrator | 2026-04-11 07:01:11.922537 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-11 07:01:11.922559 | orchestrator | Saturday 11 April 2026 07:01:11 +0000 (0:00:00.353) 0:00:12.415 ******** 2026-04-11 07:01:26.850291 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:26.850421 | orchestrator | 2026-04-11 07:01:26.850447 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-11 07:01:26.850466 | orchestrator | Saturday 11 April 2026 07:01:12 +0000 (0:00:00.132) 0:00:12.547 ******** 2026-04-11 07:01:26.850484 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:26.850501 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:01:26.850518 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:01:26.850533 | orchestrator | 2026-04-11 07:01:26.850549 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 07:01:26.850568 | orchestrator | Saturday 11 April 2026 07:01:12 +0000 
(0:00:00.522) 0:00:13.070 ******** 2026-04-11 07:01:26.850585 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:01:26.850603 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:01:26.850619 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:01:26.850637 | orchestrator | 2026-04-11 07:01:26.850655 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-11 07:01:26.850674 | orchestrator | Saturday 11 April 2026 07:01:12 +0000 (0:00:00.343) 0:00:13.414 ******** 2026-04-11 07:01:26.850692 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:26.850711 | orchestrator | 2026-04-11 07:01:26.850729 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-11 07:01:26.850748 | orchestrator | Saturday 11 April 2026 07:01:13 +0000 (0:00:00.167) 0:00:13.581 ******** 2026-04-11 07:01:26.850766 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:26.850783 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:01:26.850802 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:01:26.850879 | orchestrator | 2026-04-11 07:01:26.850898 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 07:01:26.850916 | orchestrator | Saturday 11 April 2026 07:01:13 +0000 (0:00:00.305) 0:00:13.887 ******** 2026-04-11 07:01:26.850933 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:01:26.850951 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:01:26.850967 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:01:26.850983 | orchestrator | 2026-04-11 07:01:26.851000 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-11 07:01:26.851016 | orchestrator | Saturday 11 April 2026 07:01:13 +0000 (0:00:00.559) 0:00:14.447 ******** 2026-04-11 07:01:26.851062 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:26.851079 | orchestrator | 2026-04-11 07:01:26.851095 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-11 07:01:26.851110 | orchestrator | Saturday 11 April 2026 07:01:14 +0000 (0:00:00.144) 0:00:14.591 ******** 2026-04-11 07:01:26.851127 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:26.851143 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:01:26.851159 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:01:26.851177 | orchestrator | 2026-04-11 07:01:26.851193 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 07:01:26.851210 | orchestrator | Saturday 11 April 2026 07:01:14 +0000 (0:00:00.318) 0:00:14.910 ******** 2026-04-11 07:01:26.851227 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:01:26.851243 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:01:26.851260 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:01:26.851276 | orchestrator | 2026-04-11 07:01:26.851291 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-11 07:01:26.851308 | orchestrator | Saturday 11 April 2026 07:01:14 +0000 (0:00:00.344) 0:00:15.254 ******** 2026-04-11 07:01:26.851325 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:26.851341 | orchestrator | 2026-04-11 07:01:26.851358 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-11 07:01:26.851403 | orchestrator | Saturday 11 April 2026 07:01:14 +0000 (0:00:00.150) 0:00:15.405 ******** 2026-04-11 07:01:26.851420 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:26.851436 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:01:26.851452 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:01:26.851469 | orchestrator | 2026-04-11 07:01:26.851486 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-11 07:01:26.851503 | orchestrator | Saturday 11 April 2026 
07:01:15 +0000 (0:00:00.519) 0:00:15.925 ******** 2026-04-11 07:01:26.851518 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:01:26.851534 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:01:26.851551 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:01:26.851567 | orchestrator | 2026-04-11 07:01:26.851584 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-11 07:01:26.851597 | orchestrator | Saturday 11 April 2026 07:01:15 +0000 (0:00:00.337) 0:00:16.262 ******** 2026-04-11 07:01:26.851610 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:26.851623 | orchestrator | 2026-04-11 07:01:26.851637 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-11 07:01:26.851649 | orchestrator | Saturday 11 April 2026 07:01:15 +0000 (0:00:00.132) 0:00:16.395 ******** 2026-04-11 07:01:26.851661 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:26.851672 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:01:26.851684 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:01:26.851697 | orchestrator | 2026-04-11 07:01:26.851709 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-11 07:01:26.851720 | orchestrator | Saturday 11 April 2026 07:01:16 +0000 (0:00:00.328) 0:00:16.724 ******** 2026-04-11 07:01:26.851732 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:01:26.851743 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:01:26.851756 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:01:26.851768 | orchestrator | 2026-04-11 07:01:26.851782 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-11 07:01:26.851795 | orchestrator | Saturday 11 April 2026 07:01:18 +0000 (0:00:01.930) 0:00:18.654 ******** 2026-04-11 07:01:26.851807 | orchestrator | ok: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-11 07:01:26.851821 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-11 07:01:26.851844 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-11 07:01:26.851859 | orchestrator | 2026-04-11 07:01:26.851872 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-11 07:01:26.851907 | orchestrator | Saturday 11 April 2026 07:01:20 +0000 (0:00:01.901) 0:00:20.556 ******** 2026-04-11 07:01:26.851921 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-11 07:01:26.851935 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-11 07:01:26.851948 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-11 07:01:26.851959 | orchestrator | 2026-04-11 07:01:26.851972 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-11 07:01:26.851984 | orchestrator | Saturday 11 April 2026 07:01:21 +0000 (0:00:01.893) 0:00:22.449 ******** 2026-04-11 07:01:26.851997 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-11 07:01:26.852009 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-11 07:01:26.852048 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-11 07:01:26.852062 | orchestrator | 2026-04-11 07:01:26.852076 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-11 07:01:26.852088 | orchestrator | Saturday 11 April 2026 07:01:23 +0000 (0:00:01.524) 0:00:23.973 ******** 
2026-04-11 07:01:26.852114 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:26.852128 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:01:26.852142 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:01:26.852154 | orchestrator | 2026-04-11 07:01:26.852165 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-11 07:01:26.852177 | orchestrator | Saturday 11 April 2026 07:01:23 +0000 (0:00:00.298) 0:00:24.271 ******** 2026-04-11 07:01:26.852188 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:26.852201 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:01:26.852214 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:01:26.852227 | orchestrator | 2026-04-11 07:01:26.852240 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-11 07:01:26.852252 | orchestrator | Saturday 11 April 2026 07:01:24 +0000 (0:00:00.519) 0:00:24.791 ******** 2026-04-11 07:01:26.852263 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:01:26.852276 | orchestrator | 2026-04-11 07:01:26.852288 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-11 07:01:26.852301 | orchestrator | Saturday 11 April 2026 07:01:25 +0000 (0:00:01.007) 0:00:25.799 ******** 2026-04-11 07:01:26.852329 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 07:01:26.852365 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 
07:01:27.652391 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 07:01:27.652497 | orchestrator | 2026-04-11 07:01:27.652535 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-11 07:01:27.652549 | orchestrator | Saturday 11 April 2026 07:01:27 +0000 (0:00:01.728) 0:00:27.527 ******** 2026-04-11 07:01:27.652583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 07:01:27.652597 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:27.652618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 07:01:27.652637 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:01:27.652657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 07:01:30.774963 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:01:30.775085 | orchestrator | 2026-04-11 07:01:30.775098 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-11 07:01:30.775107 | orchestrator | Saturday 11 April 2026 07:01:27 +0000 (0:00:00.715) 0:00:28.242 ******** 2026-04-11 07:01:30.775132 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 07:01:30.775162 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:30.775187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 07:01:30.775197 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:01:30.775209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 07:01:30.775226 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:01:30.775234 | orchestrator | 2026-04-11 07:01:30.775242 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-04-11 07:01:30.775249 | orchestrator | Saturday 11 April 2026 07:01:29 +0000 (0:00:01.341) 0:00:29.583 ******** 2026-04-11 07:01:30.775268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 07:01:31.817384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 07:01:31.817544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-11 07:01:31.817592 | orchestrator | 2026-04-11 07:01:31.817615 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-04-11 07:01:31.817634 | orchestrator | Saturday 11 April 2026 07:01:30 +0000 (0:00:01.867) 0:00:31.450 ******** 2026-04-11 07:01:31.817653 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 07:01:31.817672 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:01:31.817690 | orchestrator | } 2026-04-11 07:01:31.817706 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 07:01:31.817722 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:01:31.817739 | orchestrator | } 2026-04-11 07:01:31.817756 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 07:01:31.817774 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:01:31.817791 | orchestrator | } 2026-04-11 07:01:31.817809 | orchestrator | 2026-04-11 07:01:31.817827 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 07:01:31.817844 | orchestrator | Saturday 11 April 2026 07:01:31 +0000 (0:00:00.370) 0:00:31.821 ******** 2026-04-11 07:01:31.817863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 07:01:31.817898 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:01:31.817947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 07:02:37.272767 | orchestrator | skipping: [testbed-node-1] 2026-04-11 
07:02:37.272900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-11 07:02:37.272938 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:02:37.272950 | orchestrator | 2026-04-11 07:02:37.272961 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-11 07:02:37.272972 | orchestrator | Saturday 11 April 2026 07:01:32 +0000 (0:00:01.459) 0:00:33.281 ******** 2026-04-11 07:02:37.272982 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:02:37.272991 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:02:37.273001 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:02:37.273010 | orchestrator | 2026-04-11 07:02:37.273020 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-11 07:02:37.273030 | orchestrator | Saturday 11 April 2026 07:01:33 +0000 (0:00:00.325) 0:00:33.606 ******** 2026-04-11 07:02:37.273040 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:02:37.273050 | orchestrator | 2026-04-11 07:02:37.273109 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-11 07:02:37.273120 | orchestrator | Saturday 11 April 2026 07:01:34 +0000 (0:00:00.908) 0:00:34.514 ******** 2026-04-11 07:02:37.273130 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:02:37.273140 | orchestrator | 2026-04-11 07:02:37.273150 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-11 07:02:37.273159 | orchestrator | Saturday 11 April 2026 07:02:06 +0000 (0:00:32.678) 0:01:07.193 ******** 2026-04-11 07:02:37.273169 | orchestrator | 2026-04-11 07:02:37.273178 
| orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-11 07:02:37.273188 | orchestrator | Saturday 11 April 2026 07:02:06 +0000 (0:00:00.273) 0:01:07.467 ******** 2026-04-11 07:02:37.273198 | orchestrator | 2026-04-11 07:02:37.273207 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-11 07:02:37.273217 | orchestrator | Saturday 11 April 2026 07:02:07 +0000 (0:00:00.076) 0:01:07.544 ******** 2026-04-11 07:02:37.273226 | orchestrator | 2026-04-11 07:02:37.273236 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-11 07:02:37.273245 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-11 07:02:37.273255 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-11 07:02:37.273274 | orchestrator | Saturday 11 April 2026 07:02:07 +0000 (0:00:00.076) 0:01:07.621 ******** 2026-04-11 07:02:37.273284 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:02:37.273294 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:02:37.273304 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:02:37.273315 | orchestrator | 2026-04-11 07:02:37.273343 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:02:37.273356 | orchestrator | testbed-node-0 : ok=36  changed=6  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-04-11 07:02:37.273375 | orchestrator | testbed-node-1 : ok=35  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-11 07:02:37.273418 | orchestrator | testbed-node-2 : ok=35  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-11 07:02:37.273435 | orchestrator | 2026-04-11 07:02:37.273451 | orchestrator | 2026-04-11 07:02:37.273468 | orchestrator | TASKS RECAP ******************************************************************** 
2026-04-11 07:02:37.273484 | orchestrator | Saturday 11 April 2026 07:02:36 +0000 (0:00:29.765) 0:01:37.386 ******** 2026-04-11 07:02:37.273501 | orchestrator | =============================================================================== 2026-04-11 07:02:37.273520 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 32.68s 2026-04-11 07:02:37.273538 | orchestrator | horizon : Restart horizon container ------------------------------------ 29.77s 2026-04-11 07:02:37.273555 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.93s 2026-04-11 07:02:37.273568 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.90s 2026-04-11 07:02:37.273581 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.89s 2026-04-11 07:02:37.273592 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.87s 2026-04-11 07:02:37.273604 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.78s 2026-04-11 07:02:37.273615 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.73s 2026-04-11 07:02:37.273626 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.52s 2026-04-11 07:02:37.273637 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.46s 2026-04-11 07:02:37.273648 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.34s 2026-04-11 07:02:37.273660 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.34s 2026-04-11 07:02:37.273671 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.32s 2026-04-11 07:02:37.273681 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.01s 2026-04-11 
07:02:37.273691 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.91s 2026-04-11 07:02:37.273700 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s 2026-04-11 07:02:37.273710 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s 2026-04-11 07:02:37.273730 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s 2026-04-11 07:02:37.273747 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2026-04-11 07:02:37.273757 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s 2026-04-11 07:02:37.460912 | orchestrator | + osism apply -a upgrade skyline 2026-04-11 07:02:38.835233 | orchestrator | 2026-04-11 07:02:38 | INFO  | Prepare task for execution of skyline. 2026-04-11 07:02:38.904366 | orchestrator | 2026-04-11 07:02:38 | INFO  | Task c2137dc2-a863-4c84-bead-b69fa5f21241 (skyline) was prepared for execution. 2026-04-11 07:02:38.904467 | orchestrator | 2026-04-11 07:02:38 | INFO  | It takes a moment until task c2137dc2-a863-4c84-bead-b69fa5f21241 (skyline) has been started and output is visible here. 
2026-04-11 07:02:49.170814 | orchestrator | 2026-04-11 07:02:49.170925 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 07:02:49.170941 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-11 07:02:49.170954 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-11 07:02:49.170976 | orchestrator | 2026-04-11 07:02:49.170987 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 07:02:49.170998 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-11 07:02:49.171036 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-11 07:02:49.171058 | orchestrator | Saturday 11 April 2026 07:02:43 +0000 (0:00:01.260) 0:00:01.260 ******** 2026-04-11 07:02:49.171125 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:02:49.171139 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:02:49.171150 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:02:49.171161 | orchestrator | 2026-04-11 07:02:49.171172 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 07:02:49.171183 | orchestrator | Saturday 11 April 2026 07:02:44 +0000 (0:00:01.081) 0:00:02.342 ******** 2026-04-11 07:02:49.171194 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-04-11 07:02:49.171206 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-04-11 07:02:49.171217 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-04-11 07:02:49.171229 | orchestrator | 2026-04-11 07:02:49.171240 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-04-11 07:02:49.171251 | orchestrator | 2026-04-11 07:02:49.171262 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-11 
07:02:49.171272 | orchestrator | Saturday 11 April 2026 07:02:45 +0000 (0:00:00.937) 0:00:03.280 ******** 2026-04-11 07:02:49.171283 | orchestrator | included: /ansible/roles/skyline/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:02:49.171296 | orchestrator | 2026-04-11 07:02:49.171307 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-04-11 07:02:49.171317 | orchestrator | Saturday 11 April 2026 07:02:46 +0000 (0:00:01.143) 0:00:04.423 ******** 2026-04-11 07:02:49.171334 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:02:49.171368 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:02:49.171405 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:02:49.171431 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:02:49.171446 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:02:49.171466 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:02:49.171488 | orchestrator | 2026-04-11 07:02:49.171501 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-11 07:02:49.171514 | orchestrator | Saturday 11 April 2026 07:02:48 +0000 (0:00:01.951) 0:00:06.374 ******** 2026-04-11 07:02:49.171534 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:02:52.323470 | orchestrator | 2026-04-11 07:02:52.323571 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-04-11 07:02:52.323586 | orchestrator | Saturday 11 April 2026 07:02:49 +0000 (0:00:01.125) 0:00:07.500 ******** 2026-04-11 07:02:52.323602 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:02:52.323618 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:02:52.323630 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:02:52.323676 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:02:52.323710 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:02:52.323723 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:02:52.323734 | orchestrator | 2026-04-11 07:02:52.323744 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-04-11 07:02:52.323754 | orchestrator | Saturday 11 April 2026 07:02:51 +0000 (0:00:02.218) 0:00:09.718 ******** 2026-04-11 07:02:52.323770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-11 07:02:52.323795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 07:02:53.168237 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:02:53.168336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-11 07:02:53.168355 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 07:02:53.168368 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:02:53.168394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-11 07:02:53.168456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 07:02:53.168477 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:02:53.168494 | orchestrator | 2026-04-11 07:02:53.168511 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-04-11 07:02:53.168528 | orchestrator | Saturday 11 April 2026 07:02:52 +0000 (0:00:00.671) 0:00:10.390 ******** 2026-04-11 07:02:53.168546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-11 07:02:53.168566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 07:02:53.168596 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:02:53.168623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-11 07:02:53.168649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 07:02:56.268896 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:02:56.269008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-11 07:02:56.269027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 07:02:56.269103 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:02:56.269128 | orchestrator | 2026-04-11 07:02:56.269149 | orchestrator | TASK [skyline : Copying over skyline.yaml files for services] ****************** 2026-04-11 07:02:56.269169 | orchestrator | Saturday 11 April 2026 07:02:53 +0000 (0:00:01.082) 0:00:11.473 ******** 2026-04-11 07:02:56.269196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:02:56.269231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:02:56.269245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:02:56.269263 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:02:56.269285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:02:56.269306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:03:04.806119 | orchestrator | 2026-04-11 07:03:04.806231 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-04-11 07:03:04.806248 | orchestrator | Saturday 11 April 2026 07:02:56 +0000 (0:00:02.634) 0:00:14.107 ******** 2026-04-11 07:03:04.806260 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-11 07:03:04.806272 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-11 07:03:04.806283 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-11 07:03:04.806294 | orchestrator | 2026-04-11 07:03:04.806305 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] 
******************** 2026-04-11 07:03:04.806316 | orchestrator | Saturday 11 April 2026 07:02:58 +0000 (0:00:01.699) 0:00:15.807 ******** 2026-04-11 07:03:04.806327 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-11 07:03:04.806338 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-11 07:03:04.806375 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-11 07:03:04.806387 | orchestrator | 2026-04-11 07:03:04.806398 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-04-11 07:03:04.806409 | orchestrator | Saturday 11 April 2026 07:03:00 +0000 (0:00:02.000) 0:00:17.808 ******** 2026-04-11 07:03:04.806437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:03:04.806454 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:03:04.806487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:03:04.806502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:03:04.806527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET 
/']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:03:04.806541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:03:04.806553 | orchestrator | 2026-04-11 07:03:04.806565 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-04-11 07:03:04.806578 | orchestrator | Saturday 11 April 2026 07:03:02 +0000 (0:00:02.785) 0:00:20.594 ******** 2026-04-11 07:03:04.806592 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:03:04.806606 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:03:04.806618 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:03:04.806631 | orchestrator | 2026-04-11 07:03:04.806644 | orchestrator | TASK [service-check-containers : skyline | Check containers] ******************* 2026-04-11 
07:03:04.806657 | orchestrator | Saturday 11 April 2026 07:03:03 +0000 (0:00:00.710) 0:00:21.304 ******** 2026-04-11 07:03:04.806682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:03:06.882706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:03:06.882851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-11 07:03:06.882877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:03:06.882921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:03:06.882969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-11 07:03:06.882989 | orchestrator | 2026-04-11 07:03:06.883006 | orchestrator | TASK [service-check-containers : skyline | Notify handlers to restart containers] *** 2026-04-11 07:03:06.883024 | orchestrator | Saturday 11 April 2026 07:03:05 +0000 (0:00:02.319) 0:00:23.623 ******** 2026-04-11 07:03:06.883041 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 07:03:06.883064 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:03:06.883111 | orchestrator | } 2026-04-11 07:03:06.883128 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 07:03:06.883144 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:03:06.883160 | orchestrator | } 2026-04-11 07:03:06.883176 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 07:03:06.883192 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:03:06.883209 | orchestrator | } 2026-04-11 07:03:06.883227 | orchestrator | 2026-04-11 07:03:06.883244 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 07:03:06.883261 | orchestrator | Saturday 11 April 2026 07:03:06 +0000 (0:00:00.523) 0:00:24.147 ******** 2026-04-11 07:03:06.883280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-11 07:03:06.883301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 07:03:06.883332 | 
orchestrator | skipping: [testbed-node-0] 2026-04-11 07:03:06.883367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-11 07:03:43.459990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 07:03:43.460140 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:03:43.460162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-11 07:03:43.460198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-11 07:03:43.460211 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:03:43.460222 | orchestrator | 2026-04-11 07:03:43.460235 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-11 07:03:43.460255 | orchestrator | Saturday 11 April 2026 07:03:07 +0000 (0:00:01.311) 0:00:25.458 ******** 2026-04-11 07:03:43.460273 | orchestrator | 2026-04-11 07:03:43.460291 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-11 07:03:43.460309 | orchestrator | Saturday 11 April 2026 07:03:07 +0000 (0:00:00.082) 0:00:25.541 ******** 2026-04-11 07:03:43.460327 | orchestrator | 2026-04-11 07:03:43.460345 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-11 07:03:43.460364 | orchestrator | Saturday 11 April 2026 07:03:07 +0000 (0:00:00.100) 0:00:25.641 ******** 2026-04-11 07:03:43.460382 | orchestrator | 2026-04-11 07:03:43.460402 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-04-11 07:03:43.460416 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-11 07:03:43.460427 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-11 07:03:43.460467 | orchestrator | Saturday 11 April 2026 07:03:07 +0000 (0:00:00.072) 0:00:25.714 ******** 2026-04-11 07:03:43.460481 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:03:43.460495 | orchestrator | 
changed: [testbed-node-1] 2026-04-11 07:03:43.460507 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:03:43.460520 | orchestrator | 2026-04-11 07:03:43.460533 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-04-11 07:03:43.460545 | orchestrator | Saturday 11 April 2026 07:03:26 +0000 (0:00:18.389) 0:00:44.104 ******** 2026-04-11 07:03:43.460558 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:03:43.460571 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:03:43.460584 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:03:43.460597 | orchestrator | 2026-04-11 07:03:43.460609 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:03:43.460629 | orchestrator | testbed-node-0 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-11 07:03:43.460641 | orchestrator | testbed-node-1 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-11 07:03:43.460652 | orchestrator | testbed-node-2 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-11 07:03:43.460663 | orchestrator | 2026-04-11 07:03:43.460674 | orchestrator | 2026-04-11 07:03:43.460684 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:03:43.460705 | orchestrator | Saturday 11 April 2026 07:03:43 +0000 (0:00:16.746) 0:01:00.850 ******** 2026-04-11 07:03:43.460716 | orchestrator | =============================================================================== 2026-04-11 07:03:43.460727 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 18.39s 2026-04-11 07:03:43.460738 | orchestrator | skyline : Restart skyline-console container ---------------------------- 16.75s 2026-04-11 07:03:43.460748 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.79s 2026-04-11 
07:03:43.460759 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.63s 2026-04-11 07:03:43.460770 | orchestrator | service-check-containers : skyline | Check containers ------------------- 2.32s 2026-04-11 07:03:43.460780 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.22s 2026-04-11 07:03:43.460791 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.00s 2026-04-11 07:03:43.460801 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.95s 2026-04-11 07:03:43.460812 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.70s 2026-04-11 07:03:43.460822 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.31s 2026-04-11 07:03:43.460833 | orchestrator | skyline : include_tasks ------------------------------------------------- 1.14s 2026-04-11 07:03:43.460844 | orchestrator | skyline : include_tasks ------------------------------------------------- 1.13s 2026-04-11 07:03:43.460855 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.08s 2026-04-11 07:03:43.460866 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.08s 2026-04-11 07:03:43.460877 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2026-04-11 07:03:43.460887 | orchestrator | skyline : Copying over custom logos ------------------------------------- 0.71s 2026-04-11 07:03:43.460898 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS certificate --- 0.67s 2026-04-11 07:03:43.460909 | orchestrator | service-check-containers : skyline | Notify handlers to restart containers --- 0.52s 2026-04-11 07:03:43.460920 | orchestrator | skyline : Flush handlers ------------------------------------------------ 0.26s 2026-04-11 
07:03:43.634344 | orchestrator | + osism apply -a upgrade glance 2026-04-11 07:03:44.983200 | orchestrator | 2026-04-11 07:03:44 | INFO  | Prepare task for execution of glance. 2026-04-11 07:03:45.057344 | orchestrator | 2026-04-11 07:03:45 | INFO  | Task ad755059-6153-40ce-8806-d03b0494b633 (glance) was prepared for execution. 2026-04-11 07:03:45.057426 | orchestrator | 2026-04-11 07:03:45 | INFO  | It takes a moment until task ad755059-6153-40ce-8806-d03b0494b633 (glance) has been started and output is visible here. 2026-04-11 07:04:10.273990 | orchestrator | 2026-04-11 07:04:10.274195 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 07:04:10.274215 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-11 07:04:10.274230 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-11 07:04:10.274253 | orchestrator | 2026-04-11 07:04:10.274264 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 07:04:10.274275 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-11 07:04:10.274286 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-11 07:04:10.274308 | orchestrator | Saturday 11 April 2026 07:03:49 +0000 (0:00:01.112) 0:00:01.112 ******** 2026-04-11 07:04:10.274319 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:04:10.274331 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:04:10.274342 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:04:10.274380 | orchestrator | 2026-04-11 07:04:10.274392 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 07:04:10.274403 | orchestrator | Saturday 11 April 2026 07:03:50 +0000 (0:00:00.914) 0:00:02.026 ******** 2026-04-11 07:04:10.274414 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-04-11 
07:04:10.274425 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-11 07:04:10.274436 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-11 07:04:10.274447 | orchestrator |
2026-04-11 07:04:10.274459 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-11 07:04:10.274470 | orchestrator |
2026-04-11 07:04:10.274481 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-11 07:04:10.274492 | orchestrator | Saturday 11 April 2026 07:03:51 +0000 (0:00:00.741) 0:00:02.768 ********
2026-04-11 07:04:10.274517 | orchestrator | included: /ansible/roles/glance/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 07:04:10.274531 | orchestrator |
2026-04-11 07:04:10.274545 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-11 07:04:10.274557 | orchestrator | Saturday 11 April 2026 07:03:52 +0000 (0:00:01.053) 0:00:03.823 ********
2026-04-11 07:04:10.274570 | orchestrator | included: /ansible/roles/glance/tasks/rolling_upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 07:04:10.274583 | orchestrator |
2026-04-11 07:04:10.274598 | orchestrator | TASK [glance : Start Glance upgrade] *******************************************
2026-04-11 07:04:10.274618 | orchestrator | Saturday 11 April 2026 07:03:53 +0000 (0:00:00.944) 0:00:04.767 ********
2026-04-11 07:04:10.274637 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:04:10.274655 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:04:10.274673 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:04:10.274694 | orchestrator |
2026-04-11 07:04:10.274714 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-11 07:04:10.274734 | orchestrator | Saturday 11 April 2026 07:03:53 +0000 (0:00:00.469) 0:00:05.236 ********
2026-04-11 07:04:10.274748 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:04:10.274761 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:04:10.274774 | orchestrator | included: /ansible/roles/glance/tasks/config.yml for testbed-node-0
2026-04-11 07:04:10.274785 | orchestrator |
2026-04-11 07:04:10.274796 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-11 07:04:10.274806 | orchestrator | Saturday 11 April 2026 07:03:54 +0000 (0:00:00.894) 0:00:06.131 ********
2026-04-11 07:04:10.274844 | orchestrator | ok: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 07:04:10.274871 | orchestrator |
2026-04-11 07:04:10.274883 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-11 07:04:10.274894 | orchestrator | Saturday 11 April 2026 07:03:58 +0000 (0:00:03.826) 0:00:09.958 ********
2026-04-11 07:04:10.274904 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0
2026-04-11 07:04:10.274915 | orchestrator |
2026-04-11 07:04:10.274926 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-11 07:04:10.274937 | orchestrator | Saturday 11 April 2026 07:03:58 +0000 (0:00:00.596) 0:00:10.555 ********
2026-04-11 07:04:10.274948 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:04:10.274958 | orchestrator |
2026-04-11 07:04:10.274969 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-11 07:04:10.274980 | orchestrator | Saturday 11 April 2026 07:04:02 +0000 (0:00:03.570) 0:00:14.126 ********
2026-04-11 07:04:10.274991 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-11 07:04:10.275003 | orchestrator |
2026-04-11 07:04:10.275014 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-11 07:04:10.275025 | orchestrator | Saturday 11 April 2026 07:04:04 +0000 (0:00:01.539) 0:00:15.665 ********
2026-04-11 07:04:10.275036 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-11 07:04:10.275047 | orchestrator |
2026-04-11 07:04:10.275057 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-11 07:04:10.275068 | orchestrator | Saturday 11 April 2026 07:04:04 +0000 (0:00:00.638) 0:00:16.601 ********
2026-04-11 07:04:10.275079 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:04:10.275096 | orchestrator |
2026-04-11 07:04:10.275107 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-11 07:04:10.275146 | orchestrator | Saturday 11 April 2026 07:04:05 +0000 (0:00:00.145) 0:00:17.239 ********
2026-04-11 07:04:10.275157 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:04:10.275168 | orchestrator |
2026-04-11 07:04:10.275179 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-11 07:04:10.275190 | orchestrator | Saturday 11 April 2026 07:04:05 +0000 (0:00:00.137) 0:00:17.385 ********
2026-04-11 07:04:10.275201 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:04:10.275212 | orchestrator |
2026-04-11 07:04:10.275223 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-11 07:04:10.275234 | orchestrator | Saturday 11 April 2026 07:04:05 +0000 (0:00:00.137) 0:00:17.523 ********
2026-04-11 07:04:10.275245 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0
2026-04-11 07:04:10.275255 | orchestrator |
2026-04-11 07:04:10.275266 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-04-11 07:04:10.275277 | orchestrator | Saturday 11 April 2026 07:04:06 +0000 (0:00:00.610) 0:00:18.134 ********
2026-04-11 07:04:10.275290 | orchestrator | ok: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 07:04:10.275309 | orchestrator |
2026-04-11 07:04:10.275321 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-04-11 07:04:10.275339 | orchestrator | Saturday 11 April 2026 07:04:10 +0000 (0:00:03.748) 0:00:21.882 ********
2026-04-11 07:05:02.101559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 07:05:02.101682 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:05:02.101699 | orchestrator |
2026-04-11 07:05:02.101712 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2026-04-11 07:05:02.101724 | orchestrator | Saturday 11 April 2026 07:04:13 +0000 (0:00:02.931) 0:00:24.814 ********
2026-04-11 07:05:02.101737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 07:05:02.101772 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:05:02.101784 | orchestrator |
2026-04-11 07:05:02.101795 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-04-11 07:05:02.101806 | orchestrator | Saturday 11 April 2026 07:04:16 +0000 (0:00:03.201) 0:00:28.016 ********
2026-04-11 07:05:02.101817 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:05:02.101828 | orchestrator |
2026-04-11 07:05:02.101838 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-04-11 07:05:02.101865 | orchestrator | Saturday 11 April 2026 07:04:19 +0000 (0:00:03.369) 0:00:31.385 ********
2026-04-11 07:05:02.101884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 07:05:02.101897 | orchestrator |
2026-04-11 07:05:02.101908 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-04-11 07:05:02.101918 | orchestrator | Saturday 11 April 2026 07:04:23 +0000 (0:00:04.032) 0:00:35.418 ********
2026-04-11 07:05:02.101929 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:05:02.101940 | orchestrator |
2026-04-11 07:05:02.101952 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-04-11 07:05:02.101963 | orchestrator | Saturday 11 April 2026 07:04:29 +0000 (0:00:05.770) 0:00:41.188 ********
2026-04-11 07:05:02.101982 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:05:02.101993 | orchestrator |
2026-04-11 07:05:02.102003 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-11 07:05:02.102074 | orchestrator | Saturday 11 April 2026 07:04:32 +0000 (0:00:03.263) 0:00:44.452 ********
2026-04-11 07:05:02.102091 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:05:02.102105 | orchestrator |
2026-04-11 07:05:02.102119 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-11 07:05:02.102133 | orchestrator | Saturday 11 April 2026 07:04:35 +0000 (0:00:03.136) 0:00:47.589 ********
2026-04-11 07:05:02.102173 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:05:02.102187 | orchestrator |
2026-04-11 07:05:02.102200 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-11 07:05:02.102213 | orchestrator | Saturday 11 April 2026 07:04:39 +0000 (0:00:03.115) 0:00:50.704 ********
2026-04-11 07:05:02.102225 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:05:02.102238 | orchestrator |
2026-04-11 07:05:02.102251 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-11 07:05:02.102264 | orchestrator | Saturday 11 April 2026 07:04:39 +0000 (0:00:00.139) 0:00:50.844 ********
2026-04-11 07:05:02.102278 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-11 07:05:02.102293 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:05:02.102306 | orchestrator |
2026-04-11 07:05:02.102318 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-04-11 07:05:02.102331 | orchestrator | Saturday 11 April 2026 07:04:42 +0000 (0:00:03.232) 0:00:54.076 ********
2026-04-11 07:05:02.102344 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:05:02.102357 | orchestrator |
2026-04-11 07:05:02.102370 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************
2026-04-11 07:05:02.102384 | orchestrator | Saturday 11 April 2026 07:04:45 +0000 (0:00:03.316) 0:00:57.393 ********
2026-04-11 07:05:02.102396 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:05:02.102410 | orchestrator |
2026-04-11 07:05:02.102424 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-11 07:05:02.102435 | orchestrator | Saturday 11 April 2026 07:04:49 +0000 (0:00:03.282) 0:01:00.676 ********
2026-04-11 07:05:02.102446 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:05:02.102457 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:05:02.102468 | orchestrator | included: /ansible/roles/glance/tasks/stop_service.yml for testbed-node-0
2026-04-11 07:05:02.102479 | orchestrator |
2026-04-11 07:05:02.102490 | orchestrator | TASK [glance : Stop glance service] ********************************************
2026-04-11 07:05:02.102501 | orchestrator | Saturday 11 April 2026 07:04:50 +0000 (0:00:00.951) 0:01:01.627 ********
2026-04-11 07:05:02.102512 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:05:02.102523 | orchestrator |
2026-04-11 07:05:02.102534 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-04-11 07:05:02.102552 | orchestrator | Saturday 11 April 2026 07:05:02 +0000 (0:00:12.081) 0:01:13.709 ********
2026-04-11 07:05:59.517442 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:05:59.517562 | orchestrator |
2026-04-11 07:05:59.517578 | orchestrator | TASK [glance : Running Glance database expand container] ***********************
2026-04-11 07:05:59.517591 | orchestrator | Saturday 11 April 2026 07:05:04 +0000 (0:00:02.182) 0:01:15.892 ********
2026-04-11 07:05:59.517602 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:05:59.517613 | orchestrator |
2026-04-11 07:05:59.517625 | orchestrator | TASK [glance : Running Glance database migrate container] **********************
2026-04-11 07:05:59.517636 | orchestrator | Saturday 11 April 2026 07:05:27 +0000 (0:00:23.342) 0:01:39.234 ********
2026-04-11 07:05:59.517647 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:05:59.517658 | orchestrator |
2026-04-11 07:05:59.517669 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-11 07:05:59.517680 | orchestrator | Saturday 11 April 2026 07:05:42 +0000 (0:00:15.109) 0:01:54.344 ********
2026-04-11 07:05:59.517714 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:05:59.517726 | orchestrator | included: /ansible/roles/glance/tasks/config.yml for testbed-node-1, testbed-node-2
2026-04-11 07:05:59.517738 | orchestrator |
2026-04-11 07:05:59.517749 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-11 07:05:59.517760 | orchestrator | Saturday 11 April 2026 07:05:43 +0000 (0:00:00.552) 0:01:54.896 ********
2026-04-11 07:05:59.517791 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 07:05:59.517829 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 07:05:59.517843 | orchestrator |
2026-04-11 07:05:59.517855 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-11 07:05:59.517874 | orchestrator | Saturday 11 April 2026 07:05:47 +0000 (0:00:04.198) 0:01:59.095 ********
2026-04-11 07:05:59.517886 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-1, testbed-node-2
2026-04-11 07:05:59.517898 | orchestrator |
2026-04-11 07:05:59.517909 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-11 07:05:59.517920 | orchestrator | Saturday 11 April 2026 07:05:47 +0000 (0:00:00.383) 0:01:59.479 ********
2026-04-11 07:05:59.517931 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:05:59.517942 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:05:59.517953 | orchestrator |
2026-04-11 07:05:59.517964 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-11 07:05:59.517975 | orchestrator | Saturday 11 April 2026 07:05:51 +0000 (0:00:03.750) 0:02:03.230 ********
2026-04-11 07:05:59.517986 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-11 07:05:59.517999 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-11 07:05:59.518010 | orchestrator |
2026-04-11 07:05:59.518090 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-11 07:05:59.518103 | orchestrator | Saturday 11 April 2026 07:05:52 +0000 (0:00:01.293) 0:02:04.524 ********
2026-04-11 07:05:59.518114 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-11 07:05:59.518125 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-11 07:05:59.518136 | orchestrator |
2026-04-11 07:05:59.518147 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-11 07:05:59.518158 | orchestrator | Saturday 11 April 2026 07:05:54 +0000 (0:00:01.110) 0:02:05.634 ********
2026-04-11 07:05:59.518198 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:05:59.518218 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:05:59.518236 | orchestrator |
2026-04-11 07:05:59.518254 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-11 07:05:59.518271 | orchestrator | Saturday 11 April 2026 07:05:54 +0000 (0:00:00.821) 0:02:06.455 ********
2026-04-11 07:05:59.518288 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:05:59.518305 | orchestrator |
2026-04-11 07:05:59.518322 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-11 07:05:59.518339 | orchestrator | Saturday 11 April 2026 07:05:54 +0000 (0:00:00.149) 0:02:06.604 ********
2026-04-11 07:05:59.518356 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:05:59.518376 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:05:59.518393 | orchestrator |
2026-04-11 07:05:59.518413 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-11 07:05:59.518427 | orchestrator | Saturday 11 April 2026 07:05:55 +0000 (0:00:00.247) 0:02:06.852 ********
2026-04-11 07:05:59.518438 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-1, testbed-node-2
2026-04-11 07:05:59.518449 | orchestrator |
2026-04-11 07:05:59.518459 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-04-11 07:05:59.518470 | orchestrator | Saturday 11 April 2026 07:05:55 +0000 (0:00:00.407) 0:02:07.260 ********
2026-04-11 07:05:59.518497 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 07:06:05.959081 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 07:06:05.959245 | orchestrator |
2026-04-11 07:06:05.959265 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-04-11 07:06:05.959279 | orchestrator | Saturday 11 April 2026 07:05:59 +0000 (0:00:04.090) 0:02:11.351 ********
2026-04-11 07:06:05.959294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 07:06:05.959347 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:06:05.959406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 07:06:05.959424 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:06:05.959436 | orchestrator |
2026-04-11 07:06:05.959447 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2026-04-11 07:06:05.959459 | orchestrator | Saturday 11 April 2026 07:06:02 +0000 (0:00:03.240) 0:02:14.591 ********
2026-04-11 07:06:05.959470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-11 07:06:05.959509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 07:06:45.683934 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:06:45.684095 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:06:45.684125 | orchestrator | 2026-04-11 07:06:45.684146 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-11 07:06:45.684167 | orchestrator | Saturday 11 April 2026 07:06:06 +0000 (0:00:03.240) 0:02:17.832 ******** 2026-04-11 07:06:45.684186 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:06:45.684268 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:06:45.684288 | orchestrator | 2026-04-11 07:06:45.684307 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-11 07:06:45.684325 | 
orchestrator | Saturday 11 April 2026 07:06:09 +0000 (0:00:03.569) 0:02:21.401 ******** 2026-04-11 07:06:45.684351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 07:06:45.684412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 07:06:45.684437 | orchestrator | 2026-04-11 07:06:45.684457 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-11 07:06:45.684502 | orchestrator | Saturday 11 April 2026 07:06:13 +0000 (0:00:04.130) 0:02:25.532 ******** 2026-04-11 07:06:45.684546 | 
orchestrator | changed: [testbed-node-1] 2026-04-11 07:06:45.684580 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:06:45.684599 | orchestrator | 2026-04-11 07:06:45.684766 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-11 07:06:45.684801 | orchestrator | Saturday 11 April 2026 07:06:20 +0000 (0:00:06.248) 0:02:31.780 ******** 2026-04-11 07:06:45.684813 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:06:45.684824 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:06:45.684835 | orchestrator | 2026-04-11 07:06:45.684846 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-11 07:06:45.684858 | orchestrator | Saturday 11 April 2026 07:06:23 +0000 (0:00:03.320) 0:02:35.101 ******** 2026-04-11 07:06:45.684868 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:06:45.684879 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:06:45.684890 | orchestrator | 2026-04-11 07:06:45.684901 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-11 07:06:45.684924 | orchestrator | Saturday 11 April 2026 07:06:26 +0000 (0:00:03.315) 0:02:38.417 ******** 2026-04-11 07:06:45.684934 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:06:45.684945 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:06:45.684956 | orchestrator | 2026-04-11 07:06:45.684967 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-11 07:06:45.684977 | orchestrator | Saturday 11 April 2026 07:06:30 +0000 (0:00:03.501) 0:02:41.918 ******** 2026-04-11 07:06:45.684988 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:06:45.684999 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:06:45.685010 | orchestrator | 2026-04-11 07:06:45.685021 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 
2026-04-11 07:06:45.685031 | orchestrator | Saturday 11 April 2026 07:06:30 +0000 (0:00:00.259) 0:02:42.177 ******** 2026-04-11 07:06:45.685042 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-11 07:06:45.685054 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:06:45.685064 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-11 07:06:45.685075 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:06:45.685086 | orchestrator | 2026-04-11 07:06:45.685097 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-11 07:06:45.685107 | orchestrator | Saturday 11 April 2026 07:06:34 +0000 (0:00:03.699) 0:02:45.877 ******** 2026-04-11 07:06:45.685118 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:06:45.685129 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:06:45.685139 | orchestrator | 2026-04-11 07:06:45.685150 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-11 07:06:45.685161 | orchestrator | Saturday 11 April 2026 07:06:38 +0000 (0:00:03.767) 0:02:49.644 ******** 2026-04-11 07:06:45.685172 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:06:45.685182 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:06:45.685220 | orchestrator | 2026-04-11 07:06:45.685234 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-04-11 07:06:45.685244 | orchestrator | Saturday 11 April 2026 07:06:41 +0000 (0:00:03.686) 0:02:53.331 ******** 2026-04-11 07:06:45.685258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 07:06:45.685304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 07:06:50.139804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-11 07:06:50.139918 | orchestrator | 2026-04-11 07:06:50.139935 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-04-11 07:06:50.139948 | orchestrator | Saturday 11 April 2026 07:06:45 +0000 (0:00:04.186) 0:02:57.517 ******** 2026-04-11 07:06:50.139960 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 07:06:50.139972 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:06:50.139983 | orchestrator | } 2026-04-11 07:06:50.139994 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 07:06:50.140033 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:06:50.140045 | orchestrator | } 2026-04-11 07:06:50.140055 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 07:06:50.140066 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:06:50.140077 | orchestrator | } 2026-04-11 07:06:50.140088 | orchestrator | 2026-04-11 07:06:50.140099 | 
orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 07:06:50.140126 | orchestrator | Saturday 11 April 2026 07:06:46 +0000 (0:00:00.361) 0:02:57.879 ******** 2026-04-11 07:06:50.140159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}}}})  2026-04-11 07:06:50.140173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 07:06:50.140195 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:06:50.140236 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:06:50.140256 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-11 07:06:50.140271 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:06:50.140284 | orchestrator | 2026-04-11 07:06:50.140297 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-11 
07:06:50.140310 | orchestrator | Saturday 11 April 2026 07:06:49 +0000 (0:00:03.721) 0:03:01.601 ******** 2026-04-11 07:06:50.140323 | orchestrator | 2026-04-11 07:06:50.140336 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-11 07:06:50.140348 | orchestrator | Saturday 11 April 2026 07:06:50 +0000 (0:00:00.079) 0:03:01.680 ******** 2026-04-11 07:06:50.140362 | orchestrator | 2026-04-11 07:06:50.140374 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-11 07:06:50.140394 | orchestrator | Saturday 11 April 2026 07:06:50 +0000 (0:00:00.072) 0:03:01.752 ******** 2026-04-11 07:07:49.840418 | orchestrator | 2026-04-11 07:07:49.840537 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-11 07:07:49.840554 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-11 07:07:49.840566 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-11 07:07:49.840589 | orchestrator | Saturday 11 April 2026 07:06:50 +0000 (0:00:00.072) 0:03:01.825 ******** 2026-04-11 07:07:49.840600 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:07:49.840612 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:07:49.840622 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:07:49.840633 | orchestrator | 2026-04-11 07:07:49.840645 | orchestrator | TASK [glance : Running Glance database contract container] ********************* 2026-04-11 07:07:49.840656 | orchestrator | Saturday 11 April 2026 07:07:29 +0000 (0:00:39.765) 0:03:41.591 ******** 2026-04-11 07:07:49.840667 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:07:49.840678 | orchestrator | 2026-04-11 07:07:49.840689 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-11 07:07:49.840700 | orchestrator | Saturday 11 April 2026 07:07:45 +0000 
(0:00:15.421) 0:03:57.013 ******** 2026-04-11 07:07:49.840736 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:07:49.840747 | orchestrator | 2026-04-11 07:07:49.840758 | orchestrator | TASK [glance : Finish Glance upgrade] ****************************************** 2026-04-11 07:07:49.840769 | orchestrator | Saturday 11 April 2026 07:07:47 +0000 (0:00:02.441) 0:03:59.454 ******** 2026-04-11 07:07:49.840780 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:07:49.840792 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:07:49.840803 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:07:49.840813 | orchestrator | 2026-04-11 07:07:49.840824 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-11 07:07:49.840835 | orchestrator | Saturday 11 April 2026 07:07:48 +0000 (0:00:00.332) 0:03:59.787 ******** 2026-04-11 07:07:49.840846 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:07:49.840857 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:07:49.840868 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:07:49.840879 | orchestrator | 2026-04-11 07:07:49.840890 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:07:49.840901 | orchestrator | testbed-node-0 : ok=27  changed=11  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-11 07:07:49.840913 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-11 07:07:49.840924 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-11 07:07:49.840935 | orchestrator | 2026-04-11 07:07:49.840946 | orchestrator | 2026-04-11 07:07:49.840957 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:07:49.840968 | orchestrator | Saturday 11 April 2026 07:07:49 +0000 (0:00:01.242) 0:04:01.029 ******** 2026-04-11 
07:07:49.840979 | orchestrator | =============================================================================== 2026-04-11 07:07:49.841005 | orchestrator | glance : Restart glance-api container ---------------------------------- 39.77s 2026-04-11 07:07:49.841016 | orchestrator | glance : Running Glance database expand container ---------------------- 23.34s 2026-04-11 07:07:49.841027 | orchestrator | glance : Running Glance database contract container -------------------- 15.42s 2026-04-11 07:07:49.841038 | orchestrator | glance : Running Glance database migrate container --------------------- 15.11s 2026-04-11 07:07:49.841048 | orchestrator | glance : Stop glance service ------------------------------------------- 12.08s 2026-04-11 07:07:49.841059 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.25s 2026-04-11 07:07:49.841070 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.77s 2026-04-11 07:07:49.841081 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.20s 2026-04-11 07:07:49.841092 | orchestrator | service-check-containers : glance | Check containers -------------------- 4.19s 2026-04-11 07:07:49.841103 | orchestrator | glance : Copying over config.json files for services -------------------- 4.13s 2026-04-11 07:07:49.841114 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.09s 2026-04-11 07:07:49.841125 | orchestrator | glance : Copying over config.json files for services -------------------- 4.03s 2026-04-11 07:07:49.841135 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.83s 2026-04-11 07:07:49.841146 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.77s 2026-04-11 07:07:49.841157 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.75s 2026-04-11 07:07:49.841168 
| orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.75s 2026-04-11 07:07:49.841179 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.72s 2026-04-11 07:07:49.841190 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.70s 2026-04-11 07:07:49.841200 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 3.69s 2026-04-11 07:07:49.841219 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.57s 2026-04-11 07:07:50.016821 | orchestrator | + osism apply -a upgrade cinder 2026-04-11 07:07:51.311585 | orchestrator | 2026-04-11 07:07:51 | INFO  | Prepare task for execution of cinder. 2026-04-11 07:07:51.375996 | orchestrator | 2026-04-11 07:07:51 | INFO  | Task 37c9d396-d537-4de3-859c-3bb8302a2333 (cinder) was prepared for execution. 2026-04-11 07:07:51.376076 | orchestrator | 2026-04-11 07:07:51 | INFO  | It takes a moment until task 37c9d396-d537-4de3-859c-3bb8302a2333 (cinder) has been started and output is visible here. 
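The glance play above ends with a PLAY RECAP whose per-host counters (`ok=27 changed=11 unreachable=0 failed=0 ...`) are what CI tooling typically inspects to decide whether the upgrade step succeeded. As a minimal sketch (the regex and function name are illustrative, not part of osism or kolla-ansible), a recap line like the ones logged here can be parsed into structured counts:

```python
import re

# Matches a kolla-ansible PLAY RECAP line, e.g.
# "testbed-node-0 : ok=27 changed=11 unreachable=0 failed=0 skipped=15 rescued=0 ignored=0"
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Return (hostname, {counter: value}) for one recap line."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    stats = {key: int(val)
             for key, val in (pair.split("=") for pair in m.group("stats").split())}
    return m.group("host"), stats

host, stats = parse_recap(
    "testbed-node-1 : ok=20 changed=5 unreachable=0 failed=0 "
    "skipped=16 rescued=0 ignored=0"
)
# A non-zero "failed" or "unreachable" counter would mark this host's run as broken.
print(host, stats["failed"], stats["unreachable"])
```

This only covers the single-line recap format seen in this log; other callback plugins may wrap or colorize the recap differently.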
2026-04-11 07:08:15.460052 | orchestrator |
2026-04-11 07:08:15.460181 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 07:08:15.460204 | orchestrator |
2026-04-11 07:08:15.460218 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 07:08:15.460233 | orchestrator | Saturday 11 April 2026 07:07:56 +0000 (0:00:02.190) 0:00:02.190 ********
2026-04-11 07:08:15.460284 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:08:15.460302 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:08:15.460318 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:08:15.460333 | orchestrator |
2026-04-11 07:08:15.460348 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 07:08:15.460363 | orchestrator | Saturday 11 April 2026 07:07:59 +0000 (0:00:02.273) 0:00:04.464 ********
2026-04-11 07:08:15.460377 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-04-11 07:08:15.460391 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-04-11 07:08:15.460406 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-04-11 07:08:15.460421 | orchestrator |
2026-04-11 07:08:15.460436 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-04-11 07:08:15.460452 | orchestrator |
2026-04-11 07:08:15.460467 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-11 07:08:15.460483 | orchestrator | Saturday 11 April 2026 07:08:01 +0000 (0:00:02.305) 0:00:06.769 ********
2026-04-11 07:08:15.460499 | orchestrator | included: /ansible/roles/cinder/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 07:08:15.460515 | orchestrator |
2026-04-11 07:08:15.460531 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-11 07:08:15.460547 | orchestrator | Saturday 11 April 2026 07:08:04 +0000 (0:00:03.029) 0:00:09.799 ********
2026-04-11 07:08:15.460562 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:08:15.460578 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:08:15.460592 | orchestrator | included: /ansible/roles/cinder/tasks/config.yml for testbed-node-0
2026-04-11 07:08:15.460608 | orchestrator |
2026-04-11 07:08:15.460624 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-04-11 07:08:15.460640 | orchestrator | Saturday 11 April 2026 07:08:06 +0000 (0:00:01.919) 0:00:11.719 ********
2026-04-11 07:08:15.460683 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:08:15.460740 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:08:15.460761 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-11 07:08:15.460806 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-11 07:08:15.460826 | orchestrator |
2026-04-11 07:08:15.460842 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-11 07:08:15.460858 | orchestrator | Saturday 11 April 2026 07:08:09 +0000 (0:00:03.313) 0:00:15.032 ********
2026-04-11 07:08:15.460870 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:08:15.460880 | orchestrator |
2026-04-11 07:08:15.460892 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-11 07:08:15.460903 | orchestrator | Saturday 11 April 2026 07:08:10 +0000 (0:00:01.115) 0:00:16.148 ********
2026-04-11 07:08:15.460913 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0
2026-04-11 07:08:15.460923 | orchestrator |
2026-04-11 07:08:15.460934 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-04-11 07:08:15.460944 | orchestrator | Saturday 11 April 2026 07:08:12 +0000 (0:00:01.481) 0:00:17.629 ********
2026-04-11 07:08:15.460955 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-04-11 07:08:15.460965 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-04-11 07:08:15.460975 | orchestrator |
2026-04-11 07:08:15.460983 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-04-11 07:08:15.460992 | orchestrator | Saturday 11 April 2026 07:08:14 +0000 (0:00:02.639) 0:00:20.269 ********
2026-04-11 07:08:15.461009 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-11 07:08:15.461029 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-11 07:08:15.461050 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-11 07:08:35.412472 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-11 07:08:35.412570 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-11 07:08:35.412618 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-11 07:08:35.412631 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-11 07:08:35.412659 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-11 07:08:35.412671 | orchestrator |
2026-04-11 07:08:35.412683 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-04-11 07:08:35.412693 | orchestrator | Saturday 11 April 2026 07:08:21 +0000 (0:00:06.298) 0:00:26.568 ********
2026-04-11 07:08:35.412700 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-11 07:08:35.412707 | orchestrator |
2026-04-11 07:08:35.412714 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-04-11 07:08:35.412720 | orchestrator | Saturday 11 April 2026 07:08:23 +0000 (0:00:02.294) 0:00:28.863 ********
2026-04-11 07:08:35.412726 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-11 07:08:35.412733 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True})
2026-04-11 07:08:35.412740 | orchestrator |
2026-04-11 07:08:35.412747 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-04-11 07:08:35.412753 | orchestrator | Saturday 11 April 2026 07:08:26 +0000 (0:00:03.423) 0:00:32.286 ********
2026-04-11 07:08:35.412767 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-04-11 07:08:35.412774 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-04-11 07:08:35.412780 | orchestrator |
2026-04-11 07:08:35.412787 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-04-11 07:08:35.412793 | orchestrator | Saturday 11 April 2026 07:08:28 +0000 (0:00:01.823) 0:00:34.110 ********
2026-04-11 07:08:35.412799 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:08:35.412806 | orchestrator |
2026-04-11 07:08:35.412812 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-04-11 07:08:35.412818 | orchestrator | Saturday 11 April 2026 07:08:29 +0000 (0:00:01.111) 0:00:35.221 ********
2026-04-11 07:08:35.412825 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:08:35.412831 | orchestrator |
2026-04-11 07:08:35.412837 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-11 07:08:35.412848 | orchestrator | Saturday 11 April 2026 07:08:31 +0000 (0:00:01.169) 0:00:36.391 ********
2026-04-11 07:08:35.412859 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0
2026-04-11 07:08:35.412870 | orchestrator |
2026-04-11 07:08:35.412880 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-04-11 07:08:35.412939 | orchestrator | Saturday 11 April 2026 07:08:32 +0000 (0:00:01.527) 0:00:37.918 ********
2026-04-11 07:08:35.412952 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:08:35.412968 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:08:35.412990 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-11 07:08:42.216533 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-11 07:08:42.216675 | orchestrator |
2026-04-11 07:08:42.216694 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-04-11 07:08:42.216707 | orchestrator | Saturday 11 April 2026 07:08:37 +0000 (0:00:04.728) 0:00:42.647 ********
2026-04-11 07:08:42.216738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:08:42.216754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:08:42.216769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-11 07:08:42.216781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-11 07:08:42.216801 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:08:42.216814 | orchestrator |
2026-04-11 07:08:42.216843 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-04-11 07:08:42.216856 | orchestrator | Saturday 11 April 2026 07:08:39 +0000 (0:00:01.712) 0:00:44.360 ********
2026-04-11 07:08:42.216868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:08:42.216886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:08:42.216899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-11 07:08:42.216911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-11 07:08:42.216922 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:08:42.216934 | orchestrator |
2026-04-11 07:08:42.216945 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-04-11 07:08:42.216956 | orchestrator | Saturday 11 April 2026 07:08:40 +0000 (0:00:01.707) 0:00:46.068 ********
2026-04-11 07:08:42.216976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:09:09.620363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:09:09.620497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-11 07:09:09.620516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-11 07:09:09.620528 | orchestrator |
2026-04-11 07:09:09.620542 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-04-11 07:09:09.620555 | orchestrator | Saturday 11 April 2026 07:08:46 +0000 (0:00:05.268) 0:00:51.336 ********
2026-04-11 07:09:09.620566 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-04-11 07:09:09.620578 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:09:09.620590 | orchestrator |
2026-04-11 07:09:09.620602 | orchestrator | TASK [Configure uWSGI for Cinder] **********************************************
2026-04-11 07:09:09.620613 | orchestrator | Saturday 11 April 2026 07:08:47 +0000 (0:00:01.487) 0:00:52.823 ********
2026-04-11 07:09:09.620625 | orchestrator | included: service-uwsgi-config for testbed-node-0
2026-04-11 07:09:09.620637 | orchestrator |
2026-04-11 07:09:09.620648 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] *************
2026-04-11 07:09:09.620659 | orchestrator | Saturday 11 April 2026 07:08:49 +0000 (0:00:01.798) 0:00:54.622 ********
2026-04-11 07:09:09.620692 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:09:09.620705 | orchestrator |
2026-04-11 07:09:09.620716 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-04-11 07:09:09.620726 | orchestrator | Saturday 11 April 2026 07:08:51 +0000 (0:00:02.546) 0:00:57.169 ********
2026-04-11 07:09:09.620740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:09:09.620774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:09:09.620792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-11 07:09:09.620804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-11 07:09:09.620816 | orchestrator |
2026-04-11 07:09:09.620827 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-04-11 07:09:09.620839 | orchestrator | Saturday 11 April 2026 07:09:04 +0000 (0:00:12.254) 0:01:09.424 ********
2026-04-11 07:09:09.620852 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:09:09.620865 | orchestrator |
2026-04-11 07:09:09.620878 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] *********************
2026-04-11 07:09:09.620892 | orchestrator | Saturday 11 April 2026 07:09:06 +0000 (0:00:02.325) 0:01:11.750 ********
2026-04-11 07:09:09.620914 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:09:09.620927 | orchestrator |
2026-04-11 07:09:09.620941 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-04-11 07:09:09.620954 | orchestrator | Saturday 11 April 2026 07:09:09 +0000 (0:00:02.544) 0:01:14.294 ********
2026-04-11 07:09:09.620968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:09:09.620991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:09:49.772033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 07:09:49.772166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 07:09:49.772184 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:09:49.772199 | orchestrator | 2026-04-11 07:09:49.772211 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-11 07:09:49.772224 | orchestrator | Saturday 11 April 2026 07:09:10 +0000 (0:00:01.722) 0:01:16.016 ******** 2026-04-11 07:09:49.772235 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:09:49.772246 | orchestrator | 2026-04-11 07:09:49.772258 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-11 07:09:49.772290 | orchestrator | Saturday 11 April 2026 07:09:12 
+0000 (0:00:01.484) 0:01:17.501 ******** 2026-04-11 07:09:49.772351 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:09:49.772362 | orchestrator | 2026-04-11 07:09:49.772373 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-11 07:09:49.772384 | orchestrator | Saturday 11 April 2026 07:09:47 +0000 (0:00:35.687) 0:01:53.189 ******** 2026-04-11 07:09:49.772399 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:09:49.772434 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:09:49.772449 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:09:49.772476 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:09:49.772510 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:09:49.772532 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:09:49.772553 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:09:49.772587 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:09:57.433238 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:09:57.433385 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:09:57.433419 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:09:57.433431 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:09:57.433442 | orchestrator | 2026-04-11 07:09:57.433453 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-11 07:09:57.433464 | orchestrator | Saturday 11 April 2026 07:09:51 +0000 (0:00:03.424) 0:01:56.613 ******** 2026-04-11 07:09:57.433474 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:09:57.433484 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:09:57.433494 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:09:57.433503 | orchestrator | 2026-04-11 07:09:57.433513 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-11 07:09:57.433523 | orchestrator | Saturday 11 April 2026 07:09:52 +0000 (0:00:01.324) 0:01:57.938 ******** 2026-04-11 07:09:57.433534 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:09:57.433543 | orchestrator | 2026-04-11 07:09:57.433553 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-11 07:09:57.433563 | orchestrator | Saturday 11 April 2026 07:09:54 +0000 (0:00:01.542) 0:01:59.480 ******** 2026-04-11 07:09:57.433573 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-11 07:09:57.433583 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-11 07:09:57.433592 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-11 07:09:57.433602 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-11 07:09:57.433611 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-11 07:09:57.433621 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 
2026-04-11 07:09:57.433630 | orchestrator | 2026-04-11 07:09:57.433640 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-11 07:09:57.433650 | orchestrator | Saturday 11 April 2026 07:09:56 +0000 (0:00:02.726) 0:02:02.206 ******** 2026-04-11 07:09:57.433688 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-11 07:09:57.433707 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-11 07:09:57.433720 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-11 07:09:57.433731 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-11 07:09:57.433754 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-11 07:09:58.726840 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-11 07:09:58.726976 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-11 07:09:58.726997 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-11 07:09:58.727032 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-11 07:09:58.727106 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-11 07:09:58.727122 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-11 07:09:58.727134 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-11 07:09:58.727147 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-11 07:09:58.727184 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 
'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-11 07:10:02.207903 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-11 07:10:02.207996 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-11 07:10:02.208009 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-11 07:10:02.208019 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-11 07:10:02.208079 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-11 07:10:02.208092 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-11 07:10:02.208102 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-11 07:10:02.208111 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-11 07:10:02.208121 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 
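The empty strings (`''`) scattered through each `volumes` list above are left behind by conditional mounts whose condition evaluated false in the kolla-ansible templates; before the container is started they are effectively discarded. A minimal sketch of that filtering step (the function name `effective_volumes` is illustrative, not kolla-ansible's actual helper):

```python
def effective_volumes(volumes):
    """Drop empty entries left behind by disabled conditional mounts."""
    return [v for v in volumes if v]

# Example taken from the cinder-volume service definition logged above.
vols = [
    '/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro',
    '/dev/:/dev/',
    '',            # disabled conditional mount, rendered empty
    '',            # disabled conditional mount, rendered empty
    'kolla_logs:/var/log/kolla/',
    '',
]
print(effective_volumes(vols))
```

This mirrors why the containers run fine despite the apparently malformed volume specs in the task output.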
2026-04-11 07:10:02.208148 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-11 07:10:19.426785 | orchestrator | 2026-04-11 07:10:19.426899 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-11 07:10:19.426916 | orchestrator | Saturday 11 April 2026 07:10:03 +0000 (0:00:06.439) 0:02:08.646 ******** 2026-04-11 07:10:19.426928 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-11 07:10:19.426941 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-11 07:10:19.426953 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-11 07:10:19.426964 | orchestrator | 2026-04-11 07:10:19.426975 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-11 07:10:19.426986 | orchestrator | Saturday 11 April 2026 07:10:06 +0000 (0:00:02.858) 0:02:11.504 
******** 2026-04-11 07:10:19.426997 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-11 07:10:19.427008 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-11 07:10:19.427019 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-11 07:10:19.427030 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-11 07:10:19.427043 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-11 07:10:19.427054 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-11 07:10:19.427065 | orchestrator | 2026-04-11 07:10:19.427075 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-11 07:10:19.427086 | orchestrator | Saturday 11 April 2026 07:10:09 +0000 (0:00:03.783) 0:02:15.288 ******** 2026-04-11 07:10:19.427122 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-11 07:10:19.427135 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-11 07:10:19.427146 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-11 07:10:19.427157 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-11 07:10:19.427167 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-11 07:10:19.427178 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 
2026-04-11 07:10:19.427189 | orchestrator | 2026-04-11 07:10:19.427200 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-11 07:10:19.427211 | orchestrator | Saturday 11 April 2026 07:10:12 +0000 (0:00:02.349) 0:02:17.638 ******** 2026-04-11 07:10:19.427222 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:10:19.427233 | orchestrator | 2026-04-11 07:10:19.427244 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-11 07:10:19.427255 | orchestrator | Saturday 11 April 2026 07:10:13 +0000 (0:00:01.134) 0:02:18.772 ******** 2026-04-11 07:10:19.427265 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:10:19.427276 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:10:19.427287 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:10:19.427298 | orchestrator | 2026-04-11 07:10:19.427377 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-11 07:10:19.427394 | orchestrator | Saturday 11 April 2026 07:10:15 +0000 (0:00:01.594) 0:02:20.367 ******** 2026-04-11 07:10:19.427407 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:10:19.427420 | orchestrator | 2026-04-11 07:10:19.427433 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-11 07:10:19.427445 | orchestrator | Saturday 11 April 2026 07:10:16 +0000 (0:00:01.458) 0:02:21.825 ******** 2026-04-11 07:10:19.427497 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:10:19.427517 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:10:19.427541 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:10:19.427556 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:19.427576 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:19.427590 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:19.427611 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:22.517724 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:22.517863 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:22.517882 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:22.517910 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:22.517922 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:22.517935 | orchestrator | 2026-04-11 07:10:22.517948 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-11 07:10:22.517961 | orchestrator | Saturday 11 April 2026 07:10:21 +0000 (0:00:05.259) 0:02:27.085 ******** 2026-04-11 07:10:22.517996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:10:22.518089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:10:22.518139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 07:10:22.518213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 07:10:22.518232 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:10:22.518253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:10:22.518290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:10:24.316101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 07:10:24.316200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 07:10:24.316216 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:10:24.316247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:10:24.316261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:10:24.316273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 07:10:24.316389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 07:10:24.316403 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:10:24.316415 | orchestrator | 2026-04-11 07:10:24.316426 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-11 07:10:24.316437 | 
orchestrator | Saturday 11 April 2026 07:10:23 +0000 (0:00:01.964) 0:02:29.049 ******** 2026-04-11 07:10:24.316448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:10:24.316465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:10:24.316476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 07:10:24.316495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 07:10:24.316505 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:10:24.316524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:10:27.225174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:10:27.225306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 07:10:27.225378 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 07:10:27.225423 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:10:27.225443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:10:27.225460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:10:27.225497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 07:10:27.225513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 07:10:27.225528 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:10:27.225542 | orchestrator | 2026-04-11 07:10:27.225557 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-11 07:10:27.225578 | orchestrator | Saturday 11 April 2026 07:10:25 +0000 (0:00:01.725) 0:02:30.775 ******** 2026-04-11 07:10:27.225593 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:10:27.225617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:10:27.225644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:10:40.606485 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:40.606616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:40.606655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:40.606669 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:40.606682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:40.606710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:40.606724 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:40.606741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:40.606761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:40.606774 | orchestrator | 2026-04-11 07:10:40.606786 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-11 07:10:40.606799 | orchestrator | Saturday 11 April 2026 07:10:30 +0000 (0:00:05.509) 0:02:36.285 ******** 2026-04-11 07:10:40.606810 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-11 07:10:40.606823 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:10:40.606835 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-11 07:10:40.606846 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:10:40.606857 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-11 07:10:40.606867 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:10:40.606878 | orchestrator | 2026-04-11 07:10:40.606890 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-11 07:10:40.606900 | orchestrator | Saturday 11 April 2026 07:10:32 +0000 (0:00:01.805) 0:02:38.090 ******** 2026-04-11 07:10:40.606911 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:10:40.606923 | orchestrator | 2026-04-11 07:10:40.606934 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-04-11 07:10:40.606945 | orchestrator | Saturday 11 April 2026 07:10:34 +0000 (0:00:01.789) 0:02:39.880 ******** 2026-04-11 07:10:40.606955 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:10:40.606967 | orchestrator | 
changed: [testbed-node-1] 2026-04-11 07:10:40.606980 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:10:40.606992 | orchestrator | 2026-04-11 07:10:40.607005 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-11 07:10:40.607018 | orchestrator | Saturday 11 April 2026 07:10:37 +0000 (0:00:02.983) 0:02:42.863 ******** 2026-04-11 07:10:40.607041 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:10:49.752637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:10:49.752799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:10:49.752832 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:49.752853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:49.752865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:49.752917 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:49.752932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:49.752945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
2026-04-11 07:10:49.752957 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:49.752969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:49.752987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:10:56.950490 | orchestrator | 2026-04-11 07:10:56.950627 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-11 07:10:56.950646 | orchestrator | Saturday 11 April 2026 07:10:50 +0000 (0:00:13.233) 0:02:56.097 ******** 2026-04-11 07:10:56.950658 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:10:56.950670 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:10:56.950682 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:10:56.950693 | orchestrator | 2026-04-11 07:10:56.950705 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-11 07:10:56.950716 | orchestrator | Saturday 11 April 2026 07:10:53 +0000 (0:00:02.726) 0:02:58.823 ******** 2026-04-11 07:10:56.950745 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:10:56.950757 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:10:56.950769 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:10:56.950787 | orchestrator | 2026-04-11 07:10:56.950806 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-11 07:10:56.950825 | orchestrator | Saturday 11 April 2026 07:10:56 +0000 (0:00:02.832) 0:03:01.656 ******** 2026-04-11 07:10:56.950849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:10:56.950877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:10:56.950901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-11 07:10:56.950947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 07:10:56.950970 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:10:56.951028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:10:56.951054 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:10:56.951076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 07:10:56.951095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 07:10:56.951116 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:10:56.951153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:10:56.951195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:11:03.065965 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 07:11:03.066129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 07:11:03.066148 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:11:03.066162 | orchestrator | 2026-04-11 07:11:03.066175 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-11 07:11:03.066186 | orchestrator | Saturday 11 April 2026 07:10:58 +0000 (0:00:01.744) 0:03:03.401 ******** 2026-04-11 07:11:03.066197 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:11:03.066208 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:11:03.066219 | 
orchestrator | skipping: [testbed-node-2] 2026-04-11 07:11:03.066230 | orchestrator | 2026-04-11 07:11:03.066240 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-04-11 07:11:03.066251 | orchestrator | Saturday 11 April 2026 07:10:59 +0000 (0:00:01.735) 0:03:05.137 ******** 2026-04-11 07:11:03.066265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:11:03.066319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:11:03.066405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:11:03.066422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:11:03.066435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:11:03.066456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:11:03.066468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:11:03.066493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:11:06.948394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:11:06.948517 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:11:06.948560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:11:06.948605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:11:06.948618 | orchestrator | 2026-04-11 07:11:06.948631 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-04-11 07:11:06.948644 | orchestrator | Saturday 11 April 2026 07:11:05 +0000 (0:00:05.168) 0:03:10.305 ******** 2026-04-11 07:11:06.948656 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 07:11:06.948668 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:11:06.948679 | orchestrator | } 2026-04-11 07:11:06.948690 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 07:11:06.948701 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:11:06.948711 | orchestrator | } 2026-04-11 07:11:06.948722 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 07:11:06.948732 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:11:06.948743 | orchestrator | } 2026-04-11 07:11:06.948757 | orchestrator | 2026-04-11 07:11:06.948771 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 07:11:06.948783 | orchestrator | Saturday 11 April 2026 07:11:06 +0000 (0:00:01.454) 0:03:11.760 ******** 2026-04-11 07:11:06.948837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 
'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:11:06.948857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:11:06.948879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 07:11:06.948893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 07:11:06.948906 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:11:06.948921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:11:06.948950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:13:32.143203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 07:13:32.143334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 
07:13:32.143351 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:13:32.143363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:13:32.143373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:13:32.143395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-11 07:13:32.143464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-11 07:13:32.143480 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:13:32.143488 | orchestrator | 2026-04-11 07:13:32.143496 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-11 07:13:32.143505 | orchestrator | Saturday 11 April 2026 07:11:08 +0000 (0:00:01.722) 0:03:13.483 ******** 2026-04-11 07:13:32.143512 | orchestrator | 2026-04-11 07:13:32.143519 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-11 07:13:32.143525 | orchestrator | Saturday 11 April 2026 07:11:08 +0000 (0:00:00.462) 0:03:13.945 ******** 2026-04-11 
07:13:32.143532 | orchestrator | 2026-04-11 07:13:32.143540 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-11 07:13:32.143546 | orchestrator | Saturday 11 April 2026 07:11:09 +0000 (0:00:00.637) 0:03:14.583 ******** 2026-04-11 07:13:32.143553 | orchestrator | 2026-04-11 07:13:32.143560 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-11 07:13:32.143567 | orchestrator | Saturday 11 April 2026 07:11:10 +0000 (0:00:00.821) 0:03:15.404 ******** 2026-04-11 07:13:32.143573 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:13:32.143579 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:13:32.143585 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:13:32.143591 | orchestrator | 2026-04-11 07:13:32.143598 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-11 07:13:32.143604 | orchestrator | Saturday 11 April 2026 07:11:43 +0000 (0:00:33.221) 0:03:48.626 ******** 2026-04-11 07:13:32.143610 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:13:32.143617 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:13:32.143623 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:13:32.143630 | orchestrator | 2026-04-11 07:13:32.143636 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-11 07:13:32.143643 | orchestrator | Saturday 11 April 2026 07:11:56 +0000 (0:00:13.013) 0:04:01.640 ******** 2026-04-11 07:13:32.143650 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:13:32.143656 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:13:32.143663 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:13:32.143669 | orchestrator | 2026-04-11 07:13:32.143675 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-11 07:13:32.143681 | orchestrator | Saturday 11 April 2026 
07:12:35 +0000 (0:00:39.640) 0:04:41.280 ******** 2026-04-11 07:13:32.143687 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:13:32.143693 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:13:32.143700 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:13:32.143706 | orchestrator | 2026-04-11 07:13:32.143712 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-11 07:13:32.143719 | orchestrator | Saturday 11 April 2026 07:12:54 +0000 (0:00:18.648) 0:04:59.929 ******** 2026-04-11 07:13:32.143725 | orchestrator | Pausing for 30 seconds 2026-04-11 07:13:32.143733 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:13:32.143740 | orchestrator | 2026-04-11 07:13:32.143747 | orchestrator | TASK [cinder : Reload cinder services to remove RPC version pin] *************** 2026-04-11 07:13:32.143754 | orchestrator | Saturday 11 April 2026 07:13:26 +0000 (0:00:31.511) 0:05:31.440 ******** 2026-04-11 07:13:32.143767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:13:32.143790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:14:15.391631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:14:15.391790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:15.391809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:15.391821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:15.391874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:15.391910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:15.391923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:15.391935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:15.391947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-11 07:14:15.391970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-11 07:14:15.391983 | orchestrator |
2026-04-11 07:14:15.391997 | orchestrator | TASK [cinder : Running Cinder online schema migration] *************************
2026-04-11 07:14:15.392010 | orchestrator | Saturday 11 April 2026 07:14:00 +0000 (0:00:34.373) 0:06:05.814 ********
2026-04-11 07:14:15.392021 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:14:15.392033 | orchestrator |
2026-04-11 07:14:15.392044 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 07:14:15.392057 | orchestrator | testbed-node-0 : ok=44  changed=13  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-11 07:14:15.392069 | orchestrator | testbed-node-1 : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-11 07:14:15.392081 | orchestrator | testbed-node-2 : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-11 07:14:15.392092 | orchestrator |
2026-04-11 07:14:15.392101 | orchestrator |
2026-04-11 07:14:15.392111 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 07:14:15.392128 | orchestrator | Saturday 11 April 2026 07:14:15 +0000 (0:00:14.843) 0:06:20.658 ********
2026-04-11 07:14:15.805140 | orchestrator | ===============================================================================
2026-04-11 07:14:15.805244 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 39.64s
2026-04-11 07:14:15.805260 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 35.69s
2026-04-11 07:14:15.805272 | orchestrator | cinder : Reload cinder services to remove RPC version pin -------------- 34.37s
2026-04-11 07:14:15.805284 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 33.22s
2026-04-11 07:14:15.805295 | orchestrator | cinder : Wait for cinder services to update service versions ----------- 31.51s
2026-04-11 07:14:15.805305 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 18.65s
2026-04-11 07:14:15.805316 | orchestrator | cinder : Running Cinder online schema migration ------------------------ 14.84s
2026-04-11 07:14:15.805327 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.23s
2026-04-11 07:14:15.805338 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 13.01s
2026-04-11 07:14:15.805349 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.25s
2026-04-11 07:14:15.805359 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.44s
2026-04-11 07:14:15.805370 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.30s
2026-04-11 07:14:15.805381 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.51s
2026-04-11 07:14:15.805391 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.27s
2026-04-11 07:14:15.805402 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.26s
2026-04-11 07:14:15.805413 | orchestrator | service-check-containers : cinder | Check containers -------------------- 5.17s
2026-04-11 07:14:15.805424 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.73s
2026-04-11 07:14:15.805530 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.78s
2026-04-11 07:14:15.805543 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.42s
2026-04-11 07:14:15.805554 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.42s
2026-04-11 07:14:16.002935 | orchestrator | + osism apply -a upgrade barbican
2026-04-11 07:14:17.309260 | orchestrator | 2026-04-11 07:14:17 | INFO  | Prepare task for execution of barbican.
2026-04-11 07:14:17.376882 | orchestrator | 2026-04-11 07:14:17 | INFO  | Task 971e2cac-deee-4ea4-8c83-39daa5b9fd4b (barbican) was prepared for execution.
2026-04-11 07:14:17.376969 | orchestrator | 2026-04-11 07:14:17 | INFO  | It takes a moment until task 971e2cac-deee-4ea4-8c83-39daa5b9fd4b (barbican) has been started and output is visible here.
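Each role run above ends with an Ansible PLAY RECAP, one counter line per host (ok/changed/unreachable/failed/skipped/rescued/ignored). When post-processing job logs like this one, those counters can be pulled out with a small parser; the following is a minimal sketch (the regex and the `parse_recap_line` helper are ours for illustration, not part of OSISM, Zuul, or Ansible):

```python
import re

# Matches one host line of an Ansible PLAY RECAP, as seen in the log above.
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+unreachable=(?P<unreachable>\d+)\s+"
    r"failed=(?P<failed>\d+)\s+skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)\s+"
    r"ignored=(?P<ignored>\d+)"
)

def parse_recap_line(line: str) -> dict:
    """Return the host name and integer counters from one PLAY RECAP line."""
    m = RECAP_RE.search(line)
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    d = m.groupdict()
    return {"host": d.pop("host"), **{k: int(v) for k, v in d.items()}}

# Sample line taken verbatim from the cinder PLAY RECAP above.
line = ("testbed-node-0 : ok=44  changed=13  unreachable=0 "
        "failed=0 skipped=17  rescued=0 ignored=0")
result = parse_recap_line(line)
# A run is healthy when nothing failed and every host was reachable.
healthy = result["failed"] == 0 and result["unreachable"] == 0
```

Running this over every recap line of the job gives a quick pass/fail summary per role without re-reading the full console output.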
2026-04-11 07:14:31.453764 | orchestrator |
2026-04-11 07:14:31.453873 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 07:14:31.453888 | orchestrator |
2026-04-11 07:14:31.453899 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 07:14:31.453910 | orchestrator | Saturday 11 April 2026 07:14:22 +0000 (0:00:01.605) 0:00:01.605 ********
2026-04-11 07:14:31.453920 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:14:31.453930 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:14:31.453940 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:14:31.453949 | orchestrator |
2026-04-11 07:14:31.453959 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 07:14:31.453969 | orchestrator | Saturday 11 April 2026 07:14:24 +0000 (0:00:01.835) 0:00:03.440 ********
2026-04-11 07:14:31.453978 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-04-11 07:14:31.453988 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-04-11 07:14:31.453998 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-04-11 07:14:31.454008 | orchestrator |
2026-04-11 07:14:31.454075 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-04-11 07:14:31.454085 | orchestrator |
2026-04-11 07:14:31.454110 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-11 07:14:31.454120 | orchestrator | Saturday 11 April 2026 07:14:25 +0000 (0:00:01.756) 0:00:05.197 ********
2026-04-11 07:14:31.454130 | orchestrator | included: /ansible/roles/barbican/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 07:14:31.454141 | orchestrator |
2026-04-11 07:14:31.454150 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-04-11 07:14:31.454160 | orchestrator | Saturday 11 April 2026 07:14:29 +0000 (0:00:03.167) 0:00:08.364 ******** 2026-04-11 07:14:31.454175 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:14:31.454190 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:14:31.454243 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:14:31.454262 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:31.454275 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:31.454286 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:31.454305 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:31.454317 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:31.454337 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:41.920085 | orchestrator | 2026-04-11 07:14:41.920200 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-11 07:14:41.920218 | orchestrator | Saturday 11 April 2026 07:14:32 +0000 (0:00:03.423) 0:00:11.788 ******** 2026-04-11 07:14:41.920231 | orchestrator | ok: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-11 07:14:41.920243 | orchestrator | ok: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-11 
07:14:41.920254 | orchestrator | ok: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-11 07:14:41.920264 | orchestrator | 2026-04-11 07:14:41.920276 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-11 07:14:41.920287 | orchestrator | Saturday 11 April 2026 07:14:34 +0000 (0:00:01.931) 0:00:13.720 ******** 2026-04-11 07:14:41.920298 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:14:41.920309 | orchestrator | 2026-04-11 07:14:41.920320 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-11 07:14:41.920330 | orchestrator | Saturday 11 April 2026 07:14:35 +0000 (0:00:01.189) 0:00:14.910 ******** 2026-04-11 07:14:41.920360 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:14:41.920372 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:14:41.920382 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:14:41.920393 | orchestrator | 2026-04-11 07:14:41.920404 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-11 07:14:41.920434 | orchestrator | Saturday 11 April 2026 07:14:37 +0000 (0:00:01.552) 0:00:16.463 ******** 2026-04-11 07:14:41.920496 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:14:41.920510 | orchestrator | 2026-04-11 07:14:41.920521 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-11 07:14:41.920533 | orchestrator | Saturday 11 April 2026 07:14:38 +0000 (0:00:01.719) 0:00:18.182 ******** 2026-04-11 07:14:41.920549 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:14:41.920591 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:14:41.920627 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:14:41.920653 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:41.920676 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:41.920707 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:41.920727 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:41.920748 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:41.920781 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:14:45.242584 | orchestrator | 2026-04-11 07:14:45.242695 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-11 07:14:45.242711 | orchestrator | Saturday 11 April 2026 07:14:42 +0000 (0:00:04.007) 0:00:22.190 ******** 2026-04-11 07:14:45.242745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:14:45.242785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:14:45.242798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:14:45.242810 | orchestrator | 
skipping: [testbed-node-0] 2026-04-11 07:14:45.242823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:14:45.242854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:14:45.242872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 
'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:14:45.242891 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:14:45.242903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:14:45.242915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:14:45.242926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:14:45.242937 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:14:45.242949 | orchestrator | 2026-04-11 07:14:45.242960 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-11 07:14:45.242971 | orchestrator | Saturday 11 April 2026 07:14:44 +0000 (0:00:01.855) 0:00:24.046 ******** 2026-04-11 07:14:45.242990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:14:48.070090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:14:48.070244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:14:48.070265 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:14:48.070282 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:14:48.070297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:14:48.070310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:14:48.070321 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:14:48.070362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:14:48.070384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:14:48.070396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:14:48.070407 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:14:48.070419 | orchestrator | 2026-04-11 07:14:48.070432 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-11 07:14:48.070444 | orchestrator | Saturday 11 April 2026 07:14:46 +0000 (0:00:01.747) 0:00:25.793 ******** 2026-04-11 07:14:48.070528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:14:48.070582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:15:01.282289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:15:01.282402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:01.282420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:01.282434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:01.282447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:01.282570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:01.282586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:01.282599 | orchestrator | 2026-04-11 07:15:01.282612 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-11 07:15:01.282625 | orchestrator | Saturday 11 April 2026 07:14:51 +0000 (0:00:04.445) 0:00:30.239 ******** 2026-04-11 07:15:01.282636 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:15:01.282648 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:15:01.282659 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:15:01.282671 | orchestrator | 2026-04-11 07:15:01.282682 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-11 07:15:01.282694 | orchestrator | Saturday 11 April 2026 07:14:53 +0000 (0:00:02.518) 0:00:32.758 ******** 2026-04-11 07:15:01.282705 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 07:15:01.282716 | orchestrator | 2026-04-11 07:15:01.282728 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-11 07:15:01.282739 | orchestrator | Saturday 11 April 2026 07:14:55 +0000 (0:00:02.390) 0:00:35.149 ******** 2026-04-11 07:15:01.282750 | orchestrator | 
skipping: [testbed-node-0] 2026-04-11 07:15:01.282761 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:15:01.282773 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:15:01.282784 | orchestrator | 2026-04-11 07:15:01.282795 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-11 07:15:01.282806 | orchestrator | Saturday 11 April 2026 07:14:57 +0000 (0:00:01.644) 0:00:36.794 ******** 2026-04-11 07:15:01.282819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:15:01.282840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:15:01.282868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:15:06.839015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:06.839112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:06.839130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:06.839165 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:06.839191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:06.839204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:06.839217 | orchestrator | 2026-04-11 07:15:06.839231 | orchestrator | TASK 
[barbican : Copying over existing policy file] **************************** 2026-04-11 07:15:06.839261 | orchestrator | Saturday 11 April 2026 07:15:06 +0000 (0:00:08.700) 0:00:45.495 ******** 2026-04-11 07:15:06.839276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:15:06.839292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-04-11 07:15:06.839313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:15:06.839326 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:15:06.839344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:15:06.839366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:15:10.470651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:15:10.470713 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:15:10.470726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:15:10.470753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:15:10.470761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:15:10.470777 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:15:10.470786 | orchestrator | 2026-04-11 07:15:10.470793 | orchestrator | TASK [service-check-containers : barbican | Check 
containers] ****************** 2026-04-11 07:15:10.470801 | orchestrator | Saturday 11 April 2026 07:15:08 +0000 (0:00:02.074) 0:00:47.569 ******** 2026-04-11 07:15:10.470820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:15:10.470830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:15:10.470843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:15:10.470851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:10.470862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:10.470875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:14.566361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:14.566536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:14.566555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:15:14.566567 | orchestrator | 2026-04-11 07:15:14.566580 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] *** 2026-04-11 07:15:14.566593 | orchestrator | Saturday 11 April 2026 07:15:12 +0000 (0:00:04.013) 0:00:51.582 ******** 2026-04-11 07:15:14.566604 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 07:15:14.566616 | orchestrator |  "msg": 
"Notifying handlers" 2026-04-11 07:15:14.566627 | orchestrator | } 2026-04-11 07:15:14.566638 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 07:15:14.566649 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:15:14.566660 | orchestrator | } 2026-04-11 07:15:14.566671 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 07:15:14.566682 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:15:14.566692 | orchestrator | } 2026-04-11 07:15:14.566703 | orchestrator | 2026-04-11 07:15:14.566720 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 07:15:14.566738 | orchestrator | Saturday 11 April 2026 07:15:13 +0000 (0:00:01.368) 0:00:52.951 ******** 2026-04-11 07:15:14.566774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:15:14.566808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:15:14.566830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:15:14.566842 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:15:14.566854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:15:14.566867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:15:14.566884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:15:14.566896 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:15:14.566919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:18:15.645959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:18:15.646143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:18:15.646166 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:18:15.646183 | orchestrator | 2026-04-11 07:18:15.646199 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-11 07:18:15.646214 | orchestrator | Saturday 11 April 2026 07:15:16 +0000 (0:00:02.466) 0:00:55.418 ******** 2026-04-11 07:18:15.646229 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:18:15.646242 | orchestrator | 2026-04-11 07:18:15.646256 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-11 07:18:15.646270 | orchestrator | Saturday 11 April 2026 07:15:29 +0000 (0:00:12.918) 0:01:08.336 ******** 2026-04-11 07:18:15.646285 | orchestrator | 2026-04-11 07:18:15.646300 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-11 07:18:15.646314 | orchestrator | Saturday 11 April 2026 07:15:29 +0000 (0:00:00.420) 0:01:08.757 ******** 2026-04-11 07:18:15.646328 | orchestrator | 2026-04-11 07:18:15.646342 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-11 07:18:15.646356 | orchestrator | Saturday 11 April 2026 07:15:29 +0000 (0:00:00.421) 0:01:09.178 ******** 2026-04-11 07:18:15.646369 | orchestrator | 2026-04-11 07:18:15.646384 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-11 07:18:15.646392 | orchestrator | Saturday 11 April 2026 07:15:30 +0000 (0:00:00.839) 0:01:10.017 ******** 2026-04-11 07:18:15.646400 | orchestrator | changed: 
[testbed-node-0] 2026-04-11 07:18:15.646408 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:18:15.646416 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:18:15.646424 | orchestrator | 2026-04-11 07:18:15.646432 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-11 07:18:15.646440 | orchestrator | Saturday 11 April 2026 07:17:45 +0000 (0:02:14.249) 0:03:24.267 ******** 2026-04-11 07:18:15.646449 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:18:15.646457 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:18:15.646490 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:18:15.646501 | orchestrator | 2026-04-11 07:18:15.646525 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-11 07:18:15.646535 | orchestrator | Saturday 11 April 2026 07:17:57 +0000 (0:00:12.506) 0:03:36.774 ******** 2026-04-11 07:18:15.646566 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:18:15.646576 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:18:15.646585 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:18:15.646594 | orchestrator | 2026-04-11 07:18:15.646603 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:18:15.646614 | orchestrator | testbed-node-0 : ok=17  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-11 07:18:15.646624 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 07:18:15.646634 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 07:18:15.646642 | orchestrator | 2026-04-11 07:18:15.646651 | orchestrator | 2026-04-11 07:18:15.646660 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:18:15.646670 | orchestrator | Saturday 11 April 2026 
07:18:15 +0000 (0:00:17.649) 0:03:54.423 ******** 2026-04-11 07:18:15.646679 | orchestrator | =============================================================================== 2026-04-11 07:18:15.646688 | orchestrator | barbican : Restart barbican-api container ----------------------------- 134.25s 2026-04-11 07:18:15.646697 | orchestrator | barbican : Restart barbican-worker container --------------------------- 17.65s 2026-04-11 07:18:15.646706 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.92s 2026-04-11 07:18:15.646715 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 12.51s 2026-04-11 07:18:15.646724 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.70s 2026-04-11 07:18:15.646748 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.45s 2026-04-11 07:18:15.646758 | orchestrator | service-check-containers : barbican | Check containers ------------------ 4.01s 2026-04-11 07:18:15.646767 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.01s 2026-04-11 07:18:15.646777 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.42s 2026-04-11 07:18:15.646786 | orchestrator | barbican : include_tasks ------------------------------------------------ 3.17s 2026-04-11 07:18:15.646794 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.52s 2026-04-11 07:18:15.646803 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.47s 2026-04-11 07:18:15.646813 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.39s 2026-04-11 07:18:15.646821 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.07s 2026-04-11 07:18:15.646831 | orchestrator | barbican : Ensuring vassals config 
directories exist -------------------- 1.93s 2026-04-11 07:18:15.646840 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.86s 2026-04-11 07:18:15.646848 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.84s 2026-04-11 07:18:15.646856 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.76s 2026-04-11 07:18:15.646864 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.75s 2026-04-11 07:18:15.646872 | orchestrator | barbican : include_tasks ------------------------------------------------ 1.72s 2026-04-11 07:18:15.828698 | orchestrator | + osism apply -a upgrade designate 2026-04-11 07:18:17.111133 | orchestrator | 2026-04-11 07:18:17 | INFO  | Prepare task for execution of designate. 2026-04-11 07:18:17.182267 | orchestrator | 2026-04-11 07:18:17 | INFO  | Task a7756df5-eaae-419e-9429-960aa6e93665 (designate) was prepared for execution. 2026-04-11 07:18:17.182385 | orchestrator | 2026-04-11 07:18:17 | INFO  | It takes a moment until task a7756df5-eaae-419e-9429-960aa6e93665 (designate) has been started and output is visible here. 
2026-04-11 07:18:27.027860 | orchestrator | 2026-04-11 07:18:27.028024 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 07:18:27.028046 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-11 07:18:27.028063 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-11 07:18:27.028104 | orchestrator | 2026-04-11 07:18:27.028116 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 07:18:27.028127 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-11 07:18:27.028138 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-11 07:18:27.028160 | orchestrator | Saturday 11 April 2026 07:18:21 +0000 (0:00:01.443) 0:00:01.443 ******** 2026-04-11 07:18:27.028171 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:18:27.028183 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:18:27.028194 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:18:27.028205 | orchestrator | 2026-04-11 07:18:27.028216 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 07:18:27.028242 | orchestrator | Saturday 11 April 2026 07:18:22 +0000 (0:00:00.630) 0:00:02.074 ******** 2026-04-11 07:18:27.028254 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-11 07:18:27.028265 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-11 07:18:27.028276 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-11 07:18:27.028287 | orchestrator | 2026-04-11 07:18:27.028304 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-11 07:18:27.028322 | orchestrator | 2026-04-11 07:18:27.028373 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-11 
07:18:27.028386 | orchestrator | Saturday 11 April 2026 07:18:23 +0000 (0:00:01.059) 0:00:03.133 ******** 2026-04-11 07:18:27.028457 | orchestrator | included: /ansible/roles/designate/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:18:27.028471 | orchestrator | 2026-04-11 07:18:27.028483 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-11 07:18:27.028493 | orchestrator | Saturday 11 April 2026 07:18:24 +0000 (0:00:01.102) 0:00:04.236 ******** 2026-04-11 07:18:27.028508 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:18:27.028528 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:18:27.028578 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:18:27.028611 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:18:27.028624 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:18:27.028636 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:18:27.028648 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:27.028668 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:27.028688 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:31.191614 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:31.191730 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:31.191747 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:31.191760 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:31.191773 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:31.191807 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:31.191841 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:31.191862 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:31.191874 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:31.191887 | orchestrator | 2026-04-11 07:18:31.191900 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-11 07:18:31.191913 | orchestrator | Saturday 
11 April 2026 07:18:28 +0000 (0:00:03.455) 0:00:07.691 ******** 2026-04-11 07:18:31.191924 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:18:31.191936 | orchestrator | 2026-04-11 07:18:31.191947 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-11 07:18:31.191958 | orchestrator | Saturday 11 April 2026 07:18:28 +0000 (0:00:00.139) 0:00:07.830 ******** 2026-04-11 07:18:31.191969 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:18:31.191980 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:18:31.191990 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:18:31.192001 | orchestrator | 2026-04-11 07:18:31.192012 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-11 07:18:31.192030 | orchestrator | Saturday 11 April 2026 07:18:28 +0000 (0:00:00.293) 0:00:08.124 ******** 2026-04-11 07:18:31.192041 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:18:31.192052 | orchestrator | 2026-04-11 07:18:31.192062 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-11 07:18:31.192073 | orchestrator | Saturday 11 April 2026 07:18:29 +0000 (0:00:01.155) 0:00:09.279 ******** 2026-04-11 07:18:31.192085 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:18:31.192109 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:18:34.432190 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:18:34.432307 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:18:34.432454 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:18:34.432473 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:18:34.432485 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:34.432519 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:34.432539 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:34.432551 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:34.432572 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:34.432584 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:34.432595 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:34.432608 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:34.432633 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:36.401737 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:36.401872 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:36.401896 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:36.401917 | orchestrator | 2026-04-11 07:18:36.401939 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-11 07:18:36.401961 | orchestrator | Saturday 11 April 2026 07:18:35 +0000 (0:00:05.634) 0:00:14.913 ******** 2026-04-11 07:18:36.401978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:18:36.401995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 07:18:36.402104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:18:36.402131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 07:18:36.402145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 07:18:36.402156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 07:18:36.402168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:18:36.402193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 07:18:37.622080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 07:18:37.622217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 07:18:37.622235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 07:18:37.622249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:18:37.622261 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:18:37.622275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 07:18:37.622302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 07:18:37.622411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 07:18:37.622436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:18:37.622448 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:18:37.622460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 07:18:37.622471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:18:37.622483 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:18:37.622494 | orchestrator | 2026-04-11 07:18:37.622506 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-11 07:18:37.622519 | orchestrator | Saturday 11 April 2026 07:18:36 +0000 (0:00:01.459) 0:00:16.373 ******** 2026-04-11 07:18:37.622532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:18:37.622563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 07:18:38.029487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:18:38.029583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 07:18:38.029595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 07:18:38.029606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 07:18:38.029638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:18:38.029685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 07:18:38.029695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 07:18:38.029704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 07:18:38.029713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 07:18:38.029722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:18:38.029735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 07:18:38.029751 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:18:38.029768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 07:18:41.856832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 07:18:41.856948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:18:41.856965 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:18:41.856979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 07:18:41.856993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:18:41.857005 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:18:41.857017 | orchestrator | 2026-04-11 07:18:41.857029 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-11 07:18:41.857066 | orchestrator | Saturday 11 April 2026 07:18:38 +0000 (0:00:01.767) 0:00:18.141 ******** 2026-04-11 07:18:41.857095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:18:41.857134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:18:41.857148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:18:41.857162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:18:41.857175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:18:41.857199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:18:41.857221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:47.611120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:47.611238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:47.611256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:47.611352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:47.611406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:47.611419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:47.611450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:47.611462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:47.611474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:47.611487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:47.611506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:47.611519 | orchestrator | 2026-04-11 07:18:47.611532 | orchestrator | TASK [designate : Copying over 
designate.conf] ********************************* 2026-04-11 07:18:47.611550 | orchestrator | Saturday 11 April 2026 07:18:44 +0000 (0:00:06.038) 0:00:24.179 ******** 2026-04-11 07:18:47.611563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:18:47.611589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option 
httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:18:57.289351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:18:57.289498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:18:57.289532 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:18:57.289545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:18:57.289558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:57.289593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:57.289606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:57.289627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:57.289639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:57.289656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:57.289668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:57.289680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 07:18:57.289701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-11 07:19:07.730393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:19:07.730507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:19:07.730540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:19:07.730554 | orchestrator | 2026-04-11 07:19:07.730568 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-11 07:19:07.730581 | orchestrator | Saturday 11 April 2026 07:18:59 +0000 (0:00:15.289) 0:00:39.468 ******** 2026-04-11 07:19:07.730592 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-11 07:19:07.730604 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-11 07:19:07.730614 | 
orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-11 07:19:07.730625 | orchestrator | 2026-04-11 07:19:07.730636 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-11 07:19:07.730647 | orchestrator | Saturday 11 April 2026 07:19:03 +0000 (0:00:03.829) 0:00:43.298 ******** 2026-04-11 07:19:07.730659 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-11 07:19:07.730670 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-11 07:19:07.730680 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-11 07:19:07.730691 | orchestrator | 2026-04-11 07:19:07.730702 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-11 07:19:07.730712 | orchestrator | Saturday 11 April 2026 07:19:06 +0000 (0:00:02.569) 0:00:45.868 ******** 2026-04-11 07:19:07.730725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:19:07.730783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:19:07.730804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:19:07.730818 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:19:07.730831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 07:19:07.730843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 07:19:07.730871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 07:19:09.913965 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:19:09.914221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 07:19:09.914251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 07:19:09.914264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 07:19:09.914275 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-11 07:19:09.914310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 07:19:09.914344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 07:19:09.914357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 07:19:09.914375 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:19:09.914387 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:19:09.914399 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:19:09.914436 | orchestrator | 2026-04-11 07:19:09.914450 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-11 07:19:09.914474 | orchestrator | Saturday 11 April 2026 07:19:09 +0000 (0:00:02.920) 0:00:48.788 ******** 2026-04-11 07:19:09.914494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:19:11.047431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:19:11.047560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:19:11.047579 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 07:19:11.047617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 07:19:11.047631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 07:19:11.047662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 07:19:11.047675 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 07:19:11.047691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 07:19:11.047704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 07:19:11.047727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 07:19:11.047739 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 07:19:11.047759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 07:19:12.868327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 07:19:12.868543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 07:19:12.868580 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:19:12.868617 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:19:12.868630 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:19:12.868642 | orchestrator |
2026-04-11 07:19:12.868656 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-11 07:19:12.868668 | orchestrator | Saturday 11 April 2026 07:19:11 +0000 (0:00:02.744) 0:00:51.533 ********
2026-04-11 07:19:12.868679 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:19:12.868691 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:19:12.868702 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:19:12.868713 | orchestrator |
2026-04-11 07:19:12.868731 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-04-11 07:19:12.868751 | orchestrator | Saturday 11 April 2026 07:19:12 +0000 (0:00:00.328) 0:00:51.862 ********
2026-04-11 07:19:12.868796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:19:12.868843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 07:19:12.868867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 07:19:12.868899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 07:19:12.868919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 07:19:12.868939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:19:12.868957 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:19:12.868993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:19:14.892957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 07:19:14.893087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 07:19:14.893106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 07:19:14.893182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 07:19:14.893196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:19:14.893209 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:19:14.893241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:19:14.893265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 07:19:14.893287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 07:19:14.893298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 07:19:14.893310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 07:19:14.893321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:19:14.893333 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:19:14.893349 | orchestrator |
2026-04-11 07:19:14.893370 | orchestrator | TASK [service-check-containers : designate | Check containers] *****************
2026-04-11 07:19:14.893389 | orchestrator | Saturday 11 April 2026 07:19:13 +0000 (0:00:01.174) 0:00:53.036 ********
2026-04-11 07:19:14.893427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:19:18.104733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:19:18.104856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:19:18.104873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 07:19:18.104885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 07:19:18.104910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-11 07:19:18.105005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 07:19:18.105020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 07:19:18.105029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-11 07:19:18.105039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 07:19:18.105049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 07:19:18.105058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-11 07:19:18.105086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 07:19:20.909049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 07:19:20.909195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-11 07:19:20.909210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:19:20.909220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:19:20.909228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:19:20.909264 | orchestrator |
2026-04-11 07:19:20.909274 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] ***
2026-04-11 07:19:20.909282 | orchestrator | Saturday 11 April 2026 07:19:19 +0000 (0:00:06.285) 0:00:59.321 ********
2026-04-11 07:19:20.909291 | orchestrator | changed: [testbed-node-0] => {
2026-04-11 07:19:20.909299 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 07:19:20.909306 | orchestrator | }
2026-04-11 07:19:20.909314 | orchestrator | changed: [testbed-node-1] => {
2026-04-11 07:19:20.909321 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 07:19:20.909328 | orchestrator | }
2026-04-11 07:19:20.909335 | orchestrator | changed: [testbed-node-2] => {
2026-04-11 07:19:20.909343 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 07:19:20.909350 | orchestrator | }
2026-04-11 07:19:20.909357 | orchestrator |
2026-04-11 07:19:20.909376 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-11 07:19:20.909384 | orchestrator | Saturday 11 April 2026 07:19:20 +0000 (0:00:00.591) 0:00:59.913 ********
2026-04-11 07:19:20.909409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001',
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:19:20.909421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 07:19:20.909429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 07:19:20.909437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 07:19:20.909451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 07:19:20.909463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:19:20.909471 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:19:20.909486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:19:36.688266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 07:19:36.688385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 07:19:36.688427 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 07:19:36.688443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 07:19:36.688469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:19:36.688482 | 
orchestrator | skipping: [testbed-node-1] 2026-04-11 07:19:36.688516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:19:36.688534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-11 07:19:36.688547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-11 07:19:36.688568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-11 07:19:36.688580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-11 07:19:36.688597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:19:36.688610 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:19:36.688622 | orchestrator | 2026-04-11 07:19:36.688635 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-11 07:19:36.688648 | orchestrator | Saturday 11 April 2026 07:19:21 +0000 (0:00:01.391) 0:01:01.304 ******** 2026-04-11 07:19:36.688659 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:19:36.688670 | orchestrator | 2026-04-11 07:19:36.688682 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-11 07:19:36.688693 | orchestrator | Saturday 11 April 2026 07:19:36 +0000 (0:00:14.567) 0:01:15.871 ******** 2026-04-11 07:19:36.688704 | orchestrator | 2026-04-11 07:19:36.688716 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-11 07:19:36.688727 | orchestrator | Saturday 11 April 2026 07:19:36 +0000 (0:00:00.085) 0:01:15.956 ******** 2026-04-11 07:19:36.688738 | orchestrator | 2026-04-11 07:19:36.688751 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-11 07:19:36.688773 | orchestrator | Saturday 11 April 2026 07:19:36 +0000 (0:00:00.260) 0:01:16.217 ******** 2026-04-11 07:21:55.931918 | orchestrator | 2026-04-11 07:21:55.932038 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-11 07:21:55.932055 | orchestrator | [WARNING]: Failure using method 
(v2_playbook_on_handler_task_start) in callback 2026-04-11 07:21:55.932068 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-11 07:21:55.932090 | orchestrator | Saturday 11 April 2026 07:19:36 +0000 (0:00:00.075) 0:01:16.292 ******** 2026-04-11 07:21:55.932102 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:21:55.932139 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:21:55.932151 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:21:55.932162 | orchestrator | 2026-04-11 07:21:55.932173 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-11 07:21:55.932184 | orchestrator | Saturday 11 April 2026 07:19:50 +0000 (0:00:13.918) 0:01:30.210 ******** 2026-04-11 07:21:55.932195 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:21:55.932206 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:21:55.932217 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:21:55.932227 | orchestrator | 2026-04-11 07:21:55.932238 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-11 07:21:55.932249 | orchestrator | Saturday 11 April 2026 07:20:02 +0000 (0:00:12.126) 0:01:42.337 ******** 2026-04-11 07:21:55.932260 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:21:55.932271 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:21:55.932281 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:21:55.932292 | orchestrator | 2026-04-11 07:21:55.932303 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-11 07:21:55.932375 | orchestrator | Saturday 11 April 2026 07:20:15 +0000 (0:00:12.283) 0:01:54.620 ******** 2026-04-11 07:21:55.932388 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:21:55.932399 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:21:55.932410 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:21:55.932421 | orchestrator 
| 2026-04-11 07:21:55.932434 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-11 07:21:55.932447 | orchestrator | Saturday 11 April 2026 07:21:17 +0000 (0:01:02.705) 0:02:57.325 ******** 2026-04-11 07:21:55.932459 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:21:55.932472 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:21:55.932484 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:21:55.932497 | orchestrator | 2026-04-11 07:21:55.932510 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-11 07:21:55.932523 | orchestrator | Saturday 11 April 2026 07:21:29 +0000 (0:00:12.199) 0:03:09.525 ******** 2026-04-11 07:21:55.932536 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:21:55.932549 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:21:55.932561 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:21:55.932573 | orchestrator | 2026-04-11 07:21:55.932586 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-11 07:21:55.932599 | orchestrator | Saturday 11 April 2026 07:21:47 +0000 (0:00:17.868) 0:03:27.394 ******** 2026-04-11 07:21:55.932612 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:21:55.932624 | orchestrator | 2026-04-11 07:21:55.932637 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:21:55.932651 | orchestrator | testbed-node-0 : ok=22  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-11 07:21:55.932665 | orchestrator | testbed-node-1 : ok=20  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 07:21:55.932678 | orchestrator | testbed-node-2 : ok=20  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 07:21:55.932690 | orchestrator | 2026-04-11 07:21:55.932703 | orchestrator | 2026-04-11 07:21:55.932716 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:21:55.932744 | orchestrator | Saturday 11 April 2026 07:21:55 +0000 (0:00:07.797) 0:03:35.192 ******** 2026-04-11 07:21:55.932757 | orchestrator | =============================================================================== 2026-04-11 07:21:55.932770 | orchestrator | designate : Restart designate-producer container ----------------------- 62.71s 2026-04-11 07:21:55.932783 | orchestrator | designate : Restart designate-worker container ------------------------- 17.87s 2026-04-11 07:21:55.932794 | orchestrator | designate : Copying over designate.conf -------------------------------- 15.29s 2026-04-11 07:21:55.932814 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.57s 2026-04-11 07:21:55.932825 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.92s 2026-04-11 07:21:55.932836 | orchestrator | designate : Restart designate-central container ------------------------ 12.28s 2026-04-11 07:21:55.932847 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.20s 2026-04-11 07:21:55.932857 | orchestrator | designate : Restart designate-api container ---------------------------- 12.13s 2026-04-11 07:21:55.932868 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.80s 2026-04-11 07:21:55.932879 | orchestrator | service-check-containers : designate | Check containers ----------------- 6.29s 2026-04-11 07:21:55.932890 | orchestrator | designate : Copying over config.json files for services ----------------- 6.04s 2026-04-11 07:21:55.932900 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.63s 2026-04-11 07:21:55.932911 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.83s 2026-04-11 07:21:55.932922 | orchestrator | 
designate : Ensuring config directories exist --------------------------- 3.46s 2026-04-11 07:21:55.932951 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.92s 2026-04-11 07:21:55.932963 | orchestrator | designate : Copying over rndc.key --------------------------------------- 2.74s 2026-04-11 07:21:55.932974 | orchestrator | designate : Copying over named.conf ------------------------------------- 2.57s 2026-04-11 07:21:55.932985 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 1.77s 2026-04-11 07:21:55.932996 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS certificate --- 1.46s 2026-04-11 07:21:55.933007 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.39s 2026-04-11 07:21:56.115982 | orchestrator | + osism apply -a upgrade ceilometer 2026-04-11 07:21:57.391851 | orchestrator | 2026-04-11 07:21:57 | INFO  | Prepare task for execution of ceilometer. 2026-04-11 07:21:57.463028 | orchestrator | 2026-04-11 07:21:57 | INFO  | Task 664e11d4-1d0d-4211-b851-3954adff98da (ceilometer) was prepared for execution. 2026-04-11 07:21:57.463116 | orchestrator | 2026-04-11 07:21:57 | INFO  | It takes a moment until task 664e11d4-1d0d-4211-b851-3954adff98da (ceilometer) has been started and output is visible here. 
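The TASKS RECAP above ranks each designate task by wall-clock time, and the restart of the designate-producer container dominates at 62.71s. When comparing upgrade runs, it can help to pull these timings out of the console output programmatically. A minimal sketch, assuming the `----- 62.71s` recap formatting shown in this log (the sample lines below are copied from the recap; `parse_recap` is a hypothetical helper, not part of osism or kolla-ansible):

```python
import re

# Matches Ansible profile_tasks-style recap lines such as:
#   "designate : Restart designate-producer container ------------- 62.71s"
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return (task, seconds) pairs from recap lines, slowest first."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return sorted(out, key=lambda t: -t[1])

recap = [
    "designate : Restart designate-producer container ----------------------- 62.71s",
    "designate : Restart designate-worker container ------------------------- 17.87s",
    "designate : Running Designate bootstrap container ---------------------- 14.57s",
]
print(parse_recap(recap)[0])  # slowest task first
```

Feeding the full recap section through this would flag the producer restart as the step worth investigating if upgrade duration regresses.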
2026-04-11 07:22:18.363028 | orchestrator | 2026-04-11 07:22:18.363143 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 07:22:18.363213 | orchestrator | 2026-04-11 07:22:18.363292 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 07:22:18.363306 | orchestrator | Saturday 11 April 2026 07:22:02 +0000 (0:00:01.730) 0:00:01.730 ******** 2026-04-11 07:22:18.363320 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:22:18.363337 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:22:18.363351 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:22:18.363365 | orchestrator | ok: [testbed-node-3] 2026-04-11 07:22:18.363380 | orchestrator | ok: [testbed-node-4] 2026-04-11 07:22:18.363396 | orchestrator | ok: [testbed-node-5] 2026-04-11 07:22:18.363411 | orchestrator | 2026-04-11 07:22:18.363426 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 07:22:18.363441 | orchestrator | Saturday 11 April 2026 07:22:05 +0000 (0:00:02.482) 0:00:04.212 ******** 2026-04-11 07:22:18.363457 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-04-11 07:22:18.363473 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-04-11 07:22:18.363488 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-04-11 07:22:18.363504 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-04-11 07:22:18.363519 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-04-11 07:22:18.363535 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-04-11 07:22:18.363550 | orchestrator | 2026-04-11 07:22:18.363599 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-04-11 07:22:18.363617 | orchestrator | 2026-04-11 07:22:18.363634 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-04-11 07:22:18.363652 | orchestrator | Saturday 11 April 2026 07:22:07 +0000 (0:00:02.047) 0:00:06.260 ******** 2026-04-11 07:22:18.363671 | orchestrator | included: /ansible/roles/ceilometer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-11 07:22:18.363691 | orchestrator | 2026-04-11 07:22:18.363708 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-04-11 07:22:18.363725 | orchestrator | Saturday 11 April 2026 07:22:11 +0000 (0:00:04.642) 0:00:10.903 ******** 2026-04-11 07:22:18.363765 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:22:18.363787 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:22:18.363806 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:22:18.363849 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:22:18.363870 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:22:18.363901 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:22:18.363918 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:22:18.363971 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:22:18.363991 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:22:18.364006 | orchestrator | 2026-04-11 07:22:18.364022 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-04-11 07:22:18.364036 | orchestrator | Saturday 11 April 2026 07:22:14 +0000 (0:00:03.219) 0:00:14.123 ******** 2026-04-11 07:22:18.364052 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 07:22:18.364067 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-11 07:22:18.364081 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-11 07:22:18.364090 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 07:22:18.364099 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 07:22:18.364108 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 07:22:18.364116 | orchestrator | 2026-04-11 07:22:18.364125 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-04-11 07:22:18.364135 | orchestrator | Saturday 11 April 2026 07:22:18 +0000 (0:00:03.153) 0:00:17.276 
******** 2026-04-11 07:22:18.364144 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:22:18.364163 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:22:25.933084 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:22:25.933256 | orchestrator | ok: [testbed-node-3] 2026-04-11 07:22:25.933298 | orchestrator | ok: [testbed-node-4] 2026-04-11 07:22:25.933311 | orchestrator | ok: [testbed-node-5] 2026-04-11 07:22:25.933322 | orchestrator | 2026-04-11 07:22:25.933336 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-04-11 07:22:25.933348 | orchestrator | Saturday 11 April 2026 07:22:19 +0000 (0:00:01.762) 0:00:19.039 ******** 2026-04-11 07:22:25.933360 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:22:25.933373 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:22:25.933384 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:22:25.933396 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:22:25.933407 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:22:25.933418 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:22:25.933429 | orchestrator | 2026-04-11 07:22:25.933440 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter definitions] *** 2026-04-11 07:22:25.933453 | orchestrator | Saturday 11 April 2026 07:22:21 +0000 (0:00:01.961) 0:00:21.001 ******** 2026-04-11 07:22:25.933464 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:22:25.933475 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:22:25.933486 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:22:25.933497 | orchestrator | ok: [testbed-node-3] 2026-04-11 07:22:25.933507 | orchestrator | ok: [testbed-node-4] 2026-04-11 07:22:25.933518 | orchestrator | ok: [testbed-node-5] 2026-04-11 07:22:25.933529 | orchestrator | 2026-04-11 07:22:25.933540 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-04-11 
07:22:25.933551 | orchestrator | Saturday 11 April 2026 07:22:23 +0000 (0:00:01.746) 0:00:22.747 ******** 2026-04-11 07:22:25.933566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:22:25.933597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:25.933611 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:22:25.933625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:22:25.933639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:25.933680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:22:25.933694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:25.933706 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:22:25.933719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:25.933734 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:22:25.933747 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:22:25.933766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:25.933779 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:22:25.933792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:25.933810 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:22:25.933821 | orchestrator | 2026-04-11 07:22:25.933833 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-04-11 07:22:25.933844 | orchestrator | Saturday 11 April 2026 07:22:25 +0000 (0:00:02.082) 0:00:24.830 ******** 2026-04-11 07:22:25.933856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:22:25.933876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:40.727868 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:22:40.727974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:22:40.728003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 
'timeout': '30'}}})  2026-04-11 07:22:40.728012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:22:40.728021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:40.728076 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:22:40.728086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:40.728095 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:22:40.728102 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:22:40.728183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:40.728193 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:22:40.728200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:40.728208 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:22:40.728214 | 
orchestrator | 2026-04-11 07:22:40.728224 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-04-11 07:22:40.728232 | orchestrator | Saturday 11 April 2026 07:22:27 +0000 (0:00:02.122) 0:00:26.953 ******** 2026-04-11 07:22:40.728241 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 07:22:40.728249 | orchestrator | 2026-04-11 07:22:40.728256 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-04-11 07:22:40.728271 | orchestrator | Saturday 11 April 2026 07:22:29 +0000 (0:00:01.861) 0:00:28.815 ******** 2026-04-11 07:22:40.728278 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:22:40.728287 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:22:40.728294 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:22:40.728301 | orchestrator | ok: [testbed-node-3] 2026-04-11 07:22:40.728308 | orchestrator | ok: [testbed-node-4] 2026-04-11 07:22:40.728315 | orchestrator | ok: [testbed-node-5] 2026-04-11 07:22:40.728329 | orchestrator | 2026-04-11 07:22:40.728337 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-04-11 07:22:40.728344 | orchestrator | Saturday 11 April 2026 07:22:31 +0000 (0:00:01.818) 0:00:30.633 ******** 2026-04-11 07:22:40.728351 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:22:40.728358 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:22:40.728365 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:22:40.728372 | orchestrator | ok: [testbed-node-3] 2026-04-11 07:22:40.728380 | orchestrator | ok: [testbed-node-4] 2026-04-11 07:22:40.728387 | orchestrator | ok: [testbed-node-5] 2026-04-11 07:22:40.728394 | orchestrator | 2026-04-11 07:22:40.728401 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-04-11 07:22:40.728408 | orchestrator | 
Saturday 11 April 2026 07:22:33 +0000 (0:00:02.175) 0:00:32.808 ******** 2026-04-11 07:22:40.728417 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:22:40.728425 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:22:40.728433 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:22:40.728441 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:22:40.728449 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:22:40.728457 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:22:40.728464 | orchestrator | 2026-04-11 07:22:40.728472 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-04-11 07:22:40.728480 | orchestrator | Saturday 11 April 2026 07:22:35 +0000 (0:00:01.699) 0:00:34.508 ******** 2026-04-11 07:22:40.728488 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:22:40.728497 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:22:40.728505 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:22:40.728513 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:22:40.728521 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:22:40.728528 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:22:40.728535 | orchestrator | 2026-04-11 07:22:40.728544 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] ************************ 2026-04-11 07:22:40.728551 | orchestrator | Saturday 11 April 2026 07:22:37 +0000 (0:00:01.946) 0:00:36.454 ******** 2026-04-11 07:22:40.728559 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 07:22:40.728567 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-11 07:22:40.728575 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-11 07:22:40.728583 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 07:22:40.728591 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 07:22:40.728599 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 07:22:40.728607 | 
orchestrator | 2026-04-11 07:22:40.728615 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-04-11 07:22:40.728623 | orchestrator | Saturday 11 April 2026 07:22:40 +0000 (0:00:02.976) 0:00:39.431 ******** 2026-04-11 07:22:40.728631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:22:40.728647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:47.922684 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:22:47.922779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:22:47.922807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:47.922816 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:22:47.922824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:22:47.922832 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:47.922839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:47.922848 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:22:47.922855 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:22:47.922875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:47.922900 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:22:47.922907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:47.922917 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:22:47.922923 | orchestrator | 2026-04-11 07:22:47.922929 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-04-11 07:22:47.922937 | orchestrator | Saturday 11 April 2026 07:22:42 +0000 (0:00:02.292) 0:00:41.723 ******** 2026-04-11 07:22:47.922944 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:22:47.922951 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:22:47.922958 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:22:47.922965 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:22:47.922971 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:22:47.922978 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:22:47.922985 | orchestrator | 2026-04-11 07:22:47.922992 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-04-11 07:22:47.922999 
| orchestrator | Saturday 11 April 2026 07:22:44 +0000 (0:00:01.970) 0:00:43.693 ******** 2026-04-11 07:22:47.923006 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-11 07:22:47.923017 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 07:22:47.923023 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-11 07:22:47.923029 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 07:22:47.923036 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 07:22:47.923043 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 07:22:47.923050 | orchestrator | 2026-04-11 07:22:47.923057 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-04-11 07:22:47.923065 | orchestrator | Saturday 11 April 2026 07:22:47 +0000 (0:00:03.024) 0:00:46.718 ******** 2026-04-11 07:22:47.923073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:22:47.923080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:47.923094 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:22:47.923150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:22:47.923166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:59.012299 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:22:59.012473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:22:59.012510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:59.012531 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:22:59.012547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:22:59.012561 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-11 07:22:59.012596 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:22:59.012608 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:22:59.012619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-11 07:22:59.012631 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:22:59.012642 | orchestrator |
2026-04-11 07:22:59.012654 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-04-11 07:22:59.012666 | orchestrator | Saturday 11 April 2026 07:22:50 +0000 (0:00:02.511) 0:00:49.230 ********
2026-04-11 07:22:59.012676 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:22:59.012687 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:22:59.012698 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:22:59.012709 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:22:59.012719 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:22:59.012730 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:22:59.012741 | orchestrator |
2026-04-11 07:22:59.012752 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-04-11 07:22:59.012782 | orchestrator | Saturday 11 April 2026 07:22:51 +0000 (0:00:01.105) 0:00:51.051 ********
2026-04-11 07:22:59.012793 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:22:59.012807 | orchestrator |
2026-04-11 07:22:59.012820 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-04-11 07:22:59.012833 | orchestrator | Saturday 11 April 2026 07:22:53 +0000 (0:00:01.996) 0:00:52.157 ********
2026-04-11 07:22:59.012847 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:22:59.012860 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:22:59.012872 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:22:59.012885 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:22:59.012904 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:22:59.012917 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:22:59.012929 | orchestrator |
2026-04-11 07:22:59.012942 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-04-11 07:22:59.012954 | orchestrator | Saturday 11 April 2026 07:22:55 +0000 (0:00:01.996) 0:00:54.153 ********
2026-04-11 07:22:59.012968 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 07:22:59.012982 | orchestrator |
2026-04-11 07:22:59.012995 | orchestrator |
TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-04-11 07:22:59.013008 | orchestrator | Saturday 11 April 2026 07:22:57 +0000 (0:00:02.492) 0:00:56.645 ******** 2026-04-11 07:22:59.013022 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:22:59.013043 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:22:59.013101 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:22:59.013116 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:22:59.013140 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:01.910274 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 
'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:01.910390 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:01.910442 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:01.910459 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': 
{'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:01.910477 | orchestrator | 2026-04-11 07:23:01.910495 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-04-11 07:23:01.910513 | orchestrator | Saturday 11 April 2026 07:23:00 +0000 (0:00:03.288) 0:00:59.933 ******** 2026-04-11 07:23:01.910530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:01.910548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:01.910595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:01.910624 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:23:01.910642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:01.910657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:01.910673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:01.910687 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:23:01.910702 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:23:01.910718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:01.910735 | orchestrator | skipping: [testbed-node-3] 
2026-04-11 07:23:01.910762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:07.324421 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:23:07.324551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:07.324596 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:23:07.324610 | orchestrator | 2026-04-11 07:23:07.324622 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-04-11 07:23:07.324663 | orchestrator | Saturday 11 April 2026 07:23:03 +0000 (0:00:02.476) 0:01:02.410 ******** 2026-04-11 07:23:07.324676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:07.324689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:07.324701 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:23:07.324713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:07.324725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:07.324763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:07.324784 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:23:07.324796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:07.324807 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:23:07.324819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:07.324830 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:23:07.324841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:07.324852 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:23:07.324864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:07.324875 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:23:07.324886 | orchestrator | 2026-04-11 07:23:07.324897 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-04-11 07:23:07.324908 | orchestrator | Saturday 11 April 2026 07:23:05 +0000 (0:00:02.727) 0:01:05.138 ******** 2026-04-11 07:23:07.324921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:23:07.324953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:23:12.488386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:23:12.488558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:12.488602 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:12.489453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:12.489490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:12.489561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:12.489617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:12.489641 | orchestrator | 2026-04-11 07:23:12.489663 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-04-11 07:23:12.489684 | orchestrator | Saturday 11 April 2026 07:23:09 +0000 (0:00:03.345) 0:01:08.483 ******** 2026-04-11 07:23:12.489706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:23:12.489729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:23:12.489751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:23:12.489784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 
'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:12.489827 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:31.520353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}}) 2026-04-11 07:23:31.520474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:31.520491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:31.520502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 
07:23:31.520541 | orchestrator | 2026-04-11 07:23:31.520555 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-04-11 07:23:31.520568 | orchestrator | Saturday 11 April 2026 07:23:15 +0000 (0:00:06.522) 0:01:15.006 ******** 2026-04-11 07:23:31.520588 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-11 07:23:31.520608 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 07:23:31.520628 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 07:23:31.520644 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-11 07:23:31.520659 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 07:23:31.520687 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 07:23:31.520708 | orchestrator | 2026-04-11 07:23:31.520727 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-04-11 07:23:31.520745 | orchestrator | Saturday 11 April 2026 07:23:19 +0000 (0:00:03.397) 0:01:18.403 ******** 2026-04-11 07:23:31.520762 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:23:31.520781 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:23:31.520799 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:23:31.520817 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:23:31.520835 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:23:31.520854 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:23:31.520872 | orchestrator | 2026-04-11 07:23:31.520891 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-04-11 07:23:31.520910 | orchestrator | Saturday 11 April 2026 07:23:21 +0000 (0:00:02.006) 0:01:20.410 ******** 2026-04-11 07:23:31.520963 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:23:31.520983 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:23:31.521012 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:23:31.521024 | orchestrator 
| ok: [testbed-node-0] 2026-04-11 07:23:31.521036 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:23:31.521047 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:23:31.521057 | orchestrator | 2026-04-11 07:23:31.521068 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-04-11 07:23:31.521079 | orchestrator | Saturday 11 April 2026 07:23:23 +0000 (0:00:02.570) 0:01:22.980 ******** 2026-04-11 07:23:31.521089 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:23:31.521100 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:23:31.521111 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:23:31.521122 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:23:31.521153 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:23:31.521164 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:23:31.521175 | orchestrator | 2026-04-11 07:23:31.521186 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-04-11 07:23:31.521197 | orchestrator | Saturday 11 April 2026 07:23:26 +0000 (0:00:02.607) 0:01:25.588 ******** 2026-04-11 07:23:31.521207 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 07:23:31.521218 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-11 07:23:31.521228 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-11 07:23:31.521239 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-11 07:23:31.521250 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-11 07:23:31.521260 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-11 07:23:31.521271 | orchestrator | 2026-04-11 07:23:31.521281 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-04-11 07:23:31.521292 | orchestrator | Saturday 11 April 2026 07:23:29 +0000 (0:00:03.311) 0:01:28.900 ******** 2026-04-11 07:23:31.521304 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 
'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:23:31.521329 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:23:31.521341 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': 
'30'}}}) 2026-04-11 07:23:31.521353 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:31.521371 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:31.521390 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:34.495819 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:34.495977 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:34.495989 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:34.495997 | orchestrator | 2026-04-11 07:23:34.496006 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-04-11 07:23:34.496015 | orchestrator | Saturday 11 April 2026 07:23:33 +0000 (0:00:03.356) 0:01:32.256 ******** 2026-04-11 07:23:34.496022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:34.496044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:34.496051 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:23:34.496060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:34.496081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:34.496094 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:23:34.496101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:34.496109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:34.496116 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:23:34.496124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:34.496132 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:23:34.496143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:34.496151 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:23:34.496163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:41.566359 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:23:41.566494 | orchestrator | 2026-04-11 07:23:41.566514 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-04-11 07:23:41.566527 | orchestrator | Saturday 11 April 2026 07:23:35 +0000 (0:00:02.545) 0:01:34.801 ******** 2026-04-11 07:23:41.567350 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:23:41.567422 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:23:41.567431 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:23:41.567438 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:23:41.567445 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:23:41.567453 | orchestrator | skipping: [testbed-node-5] 
2026-04-11 07:23:41.567460 | orchestrator | 2026-04-11 07:23:41.567468 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-04-11 07:23:41.567477 | orchestrator | Saturday 11 April 2026 07:23:37 +0000 (0:00:02.012) 0:01:36.814 ******** 2026-04-11 07:23:41.567487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:41.567498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:41.567507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:41.567533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:41.567542 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:23:41.567574 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:23:41.567583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:41.567615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:41.567624 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:23:41.567633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:41.567642 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:23:41.567650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:41.567658 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:23:41.567666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:41.567675 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:23:41.567683 | orchestrator | 2026-04-11 07:23:41.567691 | orchestrator | TASK [service-check-containers : ceilometer | Check containers] **************** 2026-04-11 07:23:41.567699 | orchestrator | Saturday 11 April 2026 07:23:40 +0000 (0:00:02.688) 0:01:39.503 ******** 2026-04-11 07:23:41.567719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:23:41.567735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:23:46.355077 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:46.355174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-11 07:23:46.355187 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:46.355198 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:46.355243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:46.355254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:46.355281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-11 07:23:46.355292 | orchestrator | 2026-04-11 07:23:46.355302 | orchestrator | TASK [service-check-containers : ceilometer | Notify handlers to restart containers] *** 2026-04-11 07:23:46.355312 | orchestrator | Saturday 11 April 2026 07:23:43 +0000 (0:00:03.300) 0:01:42.803 ******** 2026-04-11 07:23:46.355322 | 
orchestrator | changed: [testbed-node-0] => { 2026-04-11 07:23:46.355332 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:23:46.355341 | orchestrator | } 2026-04-11 07:23:46.355350 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 07:23:46.355359 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:23:46.355367 | orchestrator | } 2026-04-11 07:23:46.355376 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 07:23:46.355385 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:23:46.355394 | orchestrator | } 2026-04-11 07:23:46.355402 | orchestrator | changed: [testbed-node-3] => { 2026-04-11 07:23:46.355411 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:23:46.355420 | orchestrator | } 2026-04-11 07:23:46.355428 | orchestrator | changed: [testbed-node-5] => { 2026-04-11 07:23:46.355437 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:23:46.355446 | orchestrator | } 2026-04-11 07:23:46.355454 | orchestrator | changed: [testbed-node-4] => { 2026-04-11 07:23:46.355463 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:23:46.355472 | orchestrator | } 2026-04-11 07:23:46.355481 | orchestrator | 2026-04-11 07:23:46.355489 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 07:23:46.355498 | orchestrator | Saturday 11 April 2026 07:23:45 +0000 (0:00:02.333) 0:01:45.136 ******** 2026-04-11 07:23:46.355509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:46.355529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:23:46.355538 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:23:46.355548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:23:46.355563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:24:43.638263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-11 07:24:43.638414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:24:43.638433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:24:43.638470 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:24:43.638485 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:24:43.638496 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:24:43.638522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:24:43.638534 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:24:43.638545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-11 07:24:43.638556 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:24:43.638567 | orchestrator | 2026-04-11 07:24:43.638579 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-04-11 07:24:43.638591 | orchestrator | Saturday 11 April 2026 07:23:48 +0000 (0:00:02.769) 0:01:47.906 ******** 2026-04-11 07:24:43.638602 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:24:43.638613 | orchestrator | 2026-04-11 07:24:43.638624 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-11 07:24:43.638635 | orchestrator | Saturday 11 April 2026 07:23:56 +0000 (0:00:08.196) 0:01:56.103 ******** 2026-04-11 07:24:43.638645 | orchestrator | 2026-04-11 07:24:43.638656 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-11 07:24:43.638713 | orchestrator | Saturday 11 April 2026 07:23:57 +0000 (0:00:00.472) 0:01:56.575 ******** 2026-04-11 07:24:43.638727 | orchestrator | 2026-04-11 07:24:43.638737 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-11 07:24:43.638748 | orchestrator | Saturday 11 April 2026 07:23:58 +0000 (0:00:00.645) 0:01:57.221 ******** 2026-04-11 07:24:43.638759 | orchestrator | 2026-04-11 07:24:43.638772 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-11 07:24:43.638785 | orchestrator | Saturday 11 April 2026 07:23:58 +0000 (0:00:00.431) 0:01:57.652 ******** 2026-04-11 07:24:43.638797 | orchestrator | 2026-04-11 07:24:43.638810 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-11 07:24:43.638823 | orchestrator | Saturday 11 April 2026 07:23:58 +0000 (0:00:00.446) 
0:01:58.098 ******** 2026-04-11 07:24:43.638836 | orchestrator | 2026-04-11 07:24:43.638849 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-11 07:24:43.638861 | orchestrator | Saturday 11 April 2026 07:23:59 +0000 (0:00:00.471) 0:01:58.570 ******** 2026-04-11 07:24:43.638883 | orchestrator | 2026-04-11 07:24:43.638895 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-04-11 07:24:43.638908 | orchestrator | Saturday 11 April 2026 07:24:00 +0000 (0:00:00.780) 0:01:59.350 ******** 2026-04-11 07:24:43.638921 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:24:43.638934 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:24:43.638947 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:24:43.638960 | orchestrator | 2026-04-11 07:24:43.638972 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-04-11 07:24:43.638985 | orchestrator | Saturday 11 April 2026 07:24:17 +0000 (0:00:17.709) 0:02:17.059 ******** 2026-04-11 07:24:43.638998 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:24:43.639010 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:24:43.639023 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:24:43.639035 | orchestrator | 2026-04-11 07:24:43.639047 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-04-11 07:24:43.639060 | orchestrator | Saturday 11 April 2026 07:24:30 +0000 (0:00:12.482) 0:02:29.542 ******** 2026-04-11 07:24:43.639073 | orchestrator | changed: [testbed-node-3] 2026-04-11 07:24:43.639086 | orchestrator | changed: [testbed-node-4] 2026-04-11 07:24:43.639099 | orchestrator | changed: [testbed-node-5] 2026-04-11 07:24:43.639112 | orchestrator | 2026-04-11 07:24:43.639123 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:24:43.639135 | 
orchestrator | testbed-node-0 : ok=26  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-11 07:24:43.639148 | orchestrator | testbed-node-1 : ok=24  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-11 07:24:43.639159 | orchestrator | testbed-node-2 : ok=24  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-11 07:24:43.639169 | orchestrator | testbed-node-3 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-11 07:24:43.639180 | orchestrator | testbed-node-4 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-11 07:24:43.639191 | orchestrator | testbed-node-5 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-11 07:24:43.639202 | orchestrator | 2026-04-11 07:24:43.639213 | orchestrator | 2026-04-11 07:24:43.639230 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:24:43.639254 | orchestrator | Saturday 11 April 2026 07:24:43 +0000 (0:00:13.214) 0:02:42.756 ******** 2026-04-11 07:24:43.639289 | orchestrator | =============================================================================== 2026-04-11 07:24:43.639308 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 17.71s 2026-04-11 07:24:43.639327 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 13.21s 2026-04-11 07:24:43.639343 | orchestrator | ceilometer : Restart ceilometer-central container ---------------------- 12.48s 2026-04-11 07:24:43.639359 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 8.20s 2026-04-11 07:24:43.639376 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 6.52s 2026-04-11 07:24:43.639394 | orchestrator | ceilometer : include_tasks ---------------------------------------------- 4.64s 2026-04-11 07:24:43.639412 
| orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 3.40s 2026-04-11 07:24:43.639429 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 3.36s 2026-04-11 07:24:43.639447 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 3.35s 2026-04-11 07:24:43.639465 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 3.31s 2026-04-11 07:24:43.639495 | orchestrator | service-check-containers : ceilometer | Check containers ---------------- 3.30s 2026-04-11 07:24:43.639514 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 3.29s 2026-04-11 07:24:43.639532 | orchestrator | ceilometer : Flush handlers --------------------------------------------- 3.25s 2026-04-11 07:24:43.639551 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 3.22s 2026-04-11 07:24:43.639569 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 3.15s 2026-04-11 07:24:43.639587 | orchestrator | ceilometer : Check custom gnocchi_resources.yaml exists ----------------- 3.02s 2026-04-11 07:24:43.639619 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 2.98s 2026-04-11 07:24:44.016857 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.77s 2026-04-11 07:24:44.016972 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 2.73s 2026-04-11 07:24:44.016991 | orchestrator | ceilometer : Copying over existing policy file -------------------------- 2.69s 2026-04-11 07:24:44.215882 | orchestrator | + osism apply -a upgrade aodh 2026-04-11 07:24:45.533570 | orchestrator | 2026-04-11 07:24:45 | INFO  | Prepare task for execution of aodh. 
2026-04-11 07:24:45.612575 | orchestrator | 2026-04-11 07:24:45 | INFO  | Task ba53c034-da0c-41dd-b484-b14d34ef133f (aodh) was prepared for execution. 2026-04-11 07:24:45.612730 | orchestrator | 2026-04-11 07:24:45 | INFO  | It takes a moment until task ba53c034-da0c-41dd-b484-b14d34ef133f (aodh) has been started and output is visible here. 2026-04-11 07:25:00.457218 | orchestrator | 2026-04-11 07:25:00.457352 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 07:25:00.457372 | orchestrator | 2026-04-11 07:25:00.457386 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 07:25:00.457433 | orchestrator | Saturday 11 April 2026 07:24:50 +0000 (0:00:01.634) 0:00:01.634 ******** 2026-04-11 07:25:00.457449 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:25:00.457465 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:25:00.457479 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:25:00.457494 | orchestrator | 2026-04-11 07:25:00.457508 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 07:25:00.457523 | orchestrator | Saturday 11 April 2026 07:24:52 +0000 (0:00:01.727) 0:00:03.361 ******** 2026-04-11 07:25:00.457539 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-04-11 07:25:00.457555 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-04-11 07:25:00.457571 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-04-11 07:25:00.457586 | orchestrator | 2026-04-11 07:25:00.457601 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-04-11 07:25:00.457616 | orchestrator | 2026-04-11 07:25:00.457656 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-11 07:25:00.457666 | orchestrator | Saturday 11 April 2026 07:24:54 +0000 (0:00:01.882) 
0:00:05.244 ******** 2026-04-11 07:25:00.457675 | orchestrator | included: /ansible/roles/aodh/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:25:00.457685 | orchestrator | 2026-04-11 07:25:00.457694 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-04-11 07:25:00.457703 | orchestrator | Saturday 11 April 2026 07:24:58 +0000 (0:00:04.051) 0:00:09.296 ******** 2026-04-11 07:25:00.457730 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:00.457767 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:00.457799 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:00.457812 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:00.457824 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:00.457835 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:00.457857 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:00.457868 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:00.457879 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:00.457898 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 
07:25:05.242216 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:05.242318 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:05.242358 | orchestrator | 2026-04-11 07:25:05.242371 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-04-11 07:25:05.242383 | orchestrator | Saturday 11 April 2026 07:25:01 +0000 (0:00:03.620) 0:00:12.916 ******** 2026-04-11 07:25:05.242393 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:25:05.242404 | orchestrator | 2026-04-11 07:25:05.242414 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-04-11 07:25:05.242424 | orchestrator | Saturday 11 April 2026 07:25:03 +0000 (0:00:01.122) 0:00:14.039 ******** 2026-04-11 07:25:05.242434 | orchestrator | skipping: [testbed-node-0] 2026-04-11 
07:25:05.242444 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:25:05.242454 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:25:05.242464 | orchestrator | 2026-04-11 07:25:05.242473 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-04-11 07:25:05.242510 | orchestrator | Saturday 11 April 2026 07:25:04 +0000 (0:00:01.427) 0:00:15.467 ******** 2026-04-11 07:25:05.242521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:25:05.242536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 07:25:05.242547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:25:05.242575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 07:25:05.242586 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:25:05.242596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:25:05.242664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 07:25:05.242677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:25:05.242687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 07:25:05.242697 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:25:05.242716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:25:10.849199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 07:25:10.849323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:25:10.849354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 07:25:10.849368 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:25:10.849382 | orchestrator | 2026-04-11 07:25:10.849395 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-11 07:25:10.849407 | orchestrator | Saturday 11 April 2026 07:25:06 +0000 (0:00:01.882) 0:00:17.349 ******** 2026-04-11 07:25:10.849418 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2026-04-11 07:25:10.849430 | orchestrator | 2026-04-11 07:25:10.849441 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-04-11 07:25:10.849452 | orchestrator | Saturday 11 April 2026 07:25:08 +0000 (0:00:01.705) 0:00:19.055 ******** 2026-04-11 07:25:10.849464 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:10.849496 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:10.849517 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:10.849535 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:10.849551 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:10.849571 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:10.849656 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:10.849691 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': 
{'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:13.934837 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:13.934954 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:13.934988 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:13.935015 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:13.935029 | orchestrator | 2026-04-11 07:25:13.935042 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-04-11 07:25:13.935055 | orchestrator | Saturday 11 April 2026 07:25:13 +0000 (0:00:04.987) 0:00:24.043 ******** 2026-04-11 07:25:13.935068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:25:13.935127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 07:25:13.935142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:25:13.935162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:25:13.935174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 07:25:13.935186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:25:13.935205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 07:25:13.935217 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:25:13.935239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:25:16.071868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 07:25:16.071995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 07:25:16.072013 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:25:16.072029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:25:16.072041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 07:25:16.072073 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:25:16.072085 | orchestrator | 2026-04-11 07:25:16.072098 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-04-11 07:25:16.072110 | orchestrator | Saturday 11 April 2026 07:25:15 +0000 (0:00:02.327) 0:00:26.371 ******** 2026-04-11 07:25:16.072123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:25:16.072160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 07:25:16.072179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:25:16.072192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:25:16.072205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 07:25:16.072226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:25:16.072239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 07:25:16.072251 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:25:16.072272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:25:20.872298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 07:25:20.872407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 
'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 07:25:20.872424 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:25:20.872438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:25:20.872472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 07:25:20.872485 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:25:20.872497 | orchestrator | 2026-04-11 
07:25:20.872509 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-04-11 07:25:20.872521 | orchestrator | Saturday 11 April 2026 07:25:17 +0000 (0:00:02.035) 0:00:28.406 ******** 2026-04-11 07:25:20.872533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:20.872698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:20.872724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:20.872747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:20.872759 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:20.872771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:20.872782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:20.872807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:29.492154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:29.492298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:29.492316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:29.492329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:29.492342 | orchestrator | 2026-04-11 07:25:29.492356 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-04-11 07:25:29.492369 | orchestrator | Saturday 11 April 2026 07:25:22 +0000 (0:00:05.594) 0:00:34.000 ******** 2026-04-11 07:25:29.492381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:29.492435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:29.492471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:29.492490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:29.492508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:29.492526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:29.492627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:29.492660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:38.651457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:38.651661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:38.651679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:38.651692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:38.651705 | orchestrator | 2026-04-11 07:25:38.651718 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-04-11 07:25:38.651731 | orchestrator | Saturday 11 April 2026 07:25:32 +0000 (0:00:09.591) 0:00:43.592 ******** 2026-04-11 07:25:38.651743 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:25:38.651755 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:25:38.651766 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:25:38.651777 | orchestrator | 2026-04-11 07:25:38.651789 | orchestrator | TASK [service-check-containers : aodh | Check containers] ********************** 2026-04-11 07:25:38.651800 | orchestrator | Saturday 11 April 2026 07:25:35 +0000 (0:00:03.215) 0:00:46.807 ******** 2026-04-11 07:25:38.651834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': 
['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:38.651899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:38.651914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 
'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:25:38.651928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:38.651941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:38.651959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-11 07:25:38.651988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:42.872119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:42.872303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:42.872338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:42.872361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:42.872382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-11 07:25:42.872444 | orchestrator | 2026-04-11 07:25:42.872467 | orchestrator | TASK [service-check-containers : aodh | Notify handlers to restart containers] *** 2026-04-11 07:25:42.872489 | orchestrator | Saturday 11 April 2026 07:25:40 +0000 (0:00:05.000) 0:00:51.808 ******** 2026-04-11 07:25:42.872546 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 07:25:42.872568 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:25:42.872591 | orchestrator | } 2026-04-11 07:25:42.872611 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 07:25:42.872655 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:25:42.872678 | orchestrator | } 2026-04-11 07:25:42.872701 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 07:25:42.872722 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:25:42.872742 | orchestrator | } 2026-04-11 07:25:42.872764 | orchestrator | 2026-04-11 07:25:42.872785 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 07:25:42.872806 | orchestrator | Saturday 11 April 2026 07:25:42 +0000 (0:00:01.610) 0:00:53.418 ******** 2026-04-11 07:25:42.872868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 
'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:25:42.872899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 07:25:42.872922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:25:42.872944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 07:25:42.872964 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:25:42.873015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:25:42.873041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 07:25:42.873077 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:26:59.867353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-11 07:26:59.867480 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:26:59.867501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:26:59.867520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-11 07:26:59.867570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-11 07:26:59.867584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-11 07:26:59.867596 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:26:59.867607 | orchestrator |
2026-04-11 07:26:59.867619 | orchestrator | TASK [aodh : Running aodh bootstrap container] *********************************
2026-04-11 07:26:59.867632 | orchestrator | Saturday 11 April 2026 07:25:44 +0000 (0:00:02.103) 0:00:55.521 ********
2026-04-11 07:26:59.867643 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:26:59.867654 | orchestrator |
2026-04-11 07:26:59.867665 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-04-11 07:26:59.867676 | orchestrator | Saturday 11 April 2026 07:26:00 +0000 (0:00:15.752) 0:01:11.275 ********
2026-04-11 07:26:59.867687 | orchestrator |
2026-04-11 07:26:59.867697 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-04-11 07:26:59.867708 | orchestrator | Saturday 11 April 2026 07:26:00 +0000 (0:00:00.467) 0:01:11.742 ********
2026-04-11 07:26:59.867719 | orchestrator |
2026-04-11 07:26:59.867749 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-04-11 07:26:59.867761 | orchestrator | Saturday 11 April 2026 07:26:01 +0000 (0:00:00.452) 0:01:12.194 ********
2026-04-11 07:26:59.867772 | orchestrator |
2026-04-11 07:26:59.867782 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] ****************************
2026-04-11 07:26:59.867793 | orchestrator | Saturday 11 April 2026 07:26:02 +0000 (0:00:00.991) 0:01:13.186 ********
2026-04-11 07:26:59.867804 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:26:59.867818 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:26:59.867837 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:26:59.867856 | orchestrator |
2026-04-11 07:26:59.867873 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] **********************
2026-04-11 07:26:59.867891 | orchestrator | Saturday 11 April 2026 07:26:15 +0000 (0:00:13.201) 0:01:26.387 ********
2026-04-11 07:26:59.867912 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:26:59.867931 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:26:59.867952 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:26:59.867971 | orchestrator |
2026-04-11 07:26:59.867990 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-04-11 07:26:59.868003 | orchestrator | Saturday 11 April 2026 07:26:28 +0000 (0:00:12.904) 0:01:39.292 ********
2026-04-11 07:26:59.868025 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:26:59.868037 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:26:59.868049 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:26:59.868061 | orchestrator |
2026-04-11 07:26:59.868074 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-04-11 07:26:59.868087 | orchestrator | Saturday 11 April 2026 07:26:41 +0000 (0:00:12.924) 0:01:52.217 ********
2026-04-11 07:26:59.868099 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:26:59.868111 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:26:59.868123 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:26:59.868135 | orchestrator |
2026-04-11 07:26:59.868148 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 07:26:59.868161 | orchestrator | testbed-node-0 : ok=16  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 07:26:59.868175 | orchestrator | testbed-node-1 : ok=15  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-11 07:26:59.868187 | orchestrator | testbed-node-2 : ok=15  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-11 07:26:59.868198 | orchestrator |
2026-04-11 07:26:59.868208 | orchestrator |
2026-04-11 07:26:59.868219 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 07:26:59.868230 | orchestrator | Saturday 11 April 2026 07:26:59 +0000 (0:00:18.327) 0:02:10.545 ********
2026-04-11 07:26:59.868241 | orchestrator | ===============================================================================
2026-04-11 07:26:59.868251 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 18.33s
2026-04-11 07:26:59.868262 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 15.75s
2026-04-11 07:26:59.868296 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 13.20s
2026-04-11 07:26:59.868309 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 12.93s
2026-04-11 07:26:59.868320 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 12.90s
2026-04-11 07:26:59.868331 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 9.59s
2026-04-11 07:26:59.868342 | orchestrator | aodh : Copying over config.json files for services ---------------------- 5.60s
2026-04-11 07:26:59.868352 | orchestrator | service-check-containers : aodh | Check containers ---------------------- 5.00s
2026-04-11 07:26:59.868363 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.99s
2026-04-11 07:26:59.868373 | orchestrator | aodh : include_tasks ---------------------------------------------------- 4.05s
2026-04-11 07:26:59.868391 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 3.62s
2026-04-11 07:26:59.868402 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 3.22s
2026-04-11 07:26:59.868413 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS certificate --- 2.33s
2026-04-11 07:26:59.868425 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.10s
2026-04-11 07:26:59.868435 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 2.03s
2026-04-11 07:26:59.868446 | orchestrator | aodh : Flush handlers --------------------------------------------------- 1.91s
2026-04-11 07:26:59.868457 | orchestrator | aodh : Copying over existing policy file -------------------------------- 1.88s
2026-04-11 07:26:59.868467 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.88s
2026-04-11 07:26:59.868478 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.73s
2026-04-11 07:26:59.868488 | orchestrator | aodh : include_tasks ---------------------------------------------------- 1.71s
2026-04-11 07:27:00.059811 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-11 07:27:00.116803 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-11 07:27:00.116944 | orchestrator | + osism apply -a bootstrap octavia
2026-04-11 07:27:01.451683 | orchestrator | 2026-04-11 07:27:01 | INFO  | Prepare task for execution of octavia.
2026-04-11 07:27:01.518211 | orchestrator | 2026-04-11 07:27:01 | INFO  | Task e3a8bb67-8529-4b78-a2f7-00011574bc3f (octavia) was prepared for execution.
2026-04-11 07:27:01.518351 | orchestrator | 2026-04-11 07:27:01 | INFO  | It takes a moment until task e3a8bb67-8529-4b78-a2f7-00011574bc3f (octavia) has been started and output is visible here.
2026-04-11 07:27:47.825343 | orchestrator |
2026-04-11 07:27:47.825461 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 07:27:47.825478 | orchestrator |
2026-04-11 07:27:47.825490 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 07:27:47.825503 | orchestrator | Saturday 11 April 2026 07:27:07 +0000 (0:00:02.178) 0:00:02.178 ********
2026-04-11 07:27:47.825514 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:27:47.825526 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:27:47.825537 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:27:47.825548 | orchestrator |
2026-04-11 07:27:47.825559 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 07:27:47.825570 | orchestrator | Saturday 11 April 2026 07:27:09 +0000 (0:00:02.307) 0:00:04.486 ********
2026-04-11 07:27:47.825581 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-11 07:27:47.825592 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-11 07:27:47.825603 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-11 07:27:47.825614 | orchestrator |
2026-04-11 07:27:47.825625 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-11 07:27:47.825636 | orchestrator |
2026-04-11 07:27:47.825647 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-11 07:27:47.825657 | orchestrator | Saturday 11 April 2026 07:27:11 +0000 (0:00:01.700) 0:00:06.186 ********
2026-04-11 07:27:47.825669 | orchestrator | included: /ansible/roles/octavia/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 07:27:47.825680 | orchestrator |
2026-04-11 07:27:47.825691 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-04-11 07:27:47.825702 | orchestrator | Saturday 11 April 2026 07:27:13 +0000 (0:00:02.108) 0:00:08.295 ********
2026-04-11 07:27:47.825713 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:27:47.825723 | orchestrator |
2026-04-11 07:27:47.825734 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-04-11 07:27:47.825745 | orchestrator | Saturday 11 April 2026 07:27:16 +0000 (0:00:03.604) 0:00:11.899 ********
2026-04-11 07:27:47.825756 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:27:47.825766 | orchestrator |
2026-04-11 07:27:47.825777 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-04-11 07:27:47.825788 | orchestrator | Saturday 11 April 2026 07:27:19 +0000 (0:00:02.984) 0:00:14.884 ********
2026-04-11 07:27:47.825799 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:27:47.825810 | orchestrator |
2026-04-11 07:27:47.825823 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-04-11 07:27:47.825836 | orchestrator | Saturday 11 April 2026 07:27:22 +0000 (0:00:03.136) 0:00:18.021 ********
2026-04-11 07:27:47.825849 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:27:47.825861 | orchestrator |
2026-04-11 07:27:47.825873 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-04-11 07:27:47.825885 | orchestrator | Saturday 11 April 2026 07:27:26 +0000 (0:00:03.544) 0:00:21.565 ********
2026-04-11 07:27:47.825898 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:27:47.825912 | orchestrator |
2026-04-11 07:27:47.825924 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 07:27:47.825937 | orchestrator | testbed-node-0 : ok=8  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 07:27:47.825976 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 07:27:47.825991 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-11 07:27:47.826003 | orchestrator |
2026-04-11 07:27:47.826075 | orchestrator |
2026-04-11 07:27:47.826090 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 07:27:47.826103 | orchestrator | Saturday 11 April 2026 07:27:47 +0000 (0:00:20.986) 0:00:42.552 ********
2026-04-11 07:27:47.826115 | orchestrator | ===============================================================================
2026-04-11 07:27:47.826142 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.99s
2026-04-11 07:27:47.826195 | orchestrator | octavia : Creating Octavia database ------------------------------------- 3.60s
2026-04-11 07:27:47.826207 | orchestrator | octavia : Creating Octavia persistence database user and setting permissions --- 3.54s
2026-04-11 07:27:47.826218 | orchestrator | octavia : Creating Octavia database user and setting permissions -------- 3.14s
2026-04-11 07:27:47.826229 | orchestrator | octavia : Creating Octavia persistence database ------------------------- 2.98s
2026-04-11 07:27:47.826240 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.31s
2026-04-11 07:27:47.826251 | orchestrator | octavia : include_tasks ------------------------------------------------- 2.11s
2026-04-11 07:27:47.826261 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.70s
2026-04-11 07:27:48.014969 | orchestrator | + osism apply -a upgrade octavia
2026-04-11 07:27:49.342629 | orchestrator | 2026-04-11 07:27:49 | INFO  | Prepare task for execution of octavia.
2026-04-11 07:27:49.408851 | orchestrator | 2026-04-11 07:27:49 | INFO  | Task eaf41282-74f2-4d2c-b053-e13b8c3120b9 (octavia) was prepared for execution.
2026-04-11 07:27:49.408927 | orchestrator | 2026-04-11 07:27:49 | INFO  | It takes a moment until task eaf41282-74f2-4d2c-b053-e13b8c3120b9 (octavia) has been started and output is visible here.
2026-04-11 07:28:28.884891 | orchestrator |
2026-04-11 07:28:28.885098 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 07:28:28.885119 | orchestrator |
2026-04-11 07:28:28.885131 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 07:28:28.885142 | orchestrator | Saturday 11 April 2026 07:27:54 +0000 (0:00:01.596) 0:00:01.596 ********
2026-04-11 07:28:28.885154 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:28:28.885166 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:28:28.885177 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:28:28.885188 | orchestrator |
2026-04-11 07:28:28.885199 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 07:28:28.885210 | orchestrator | Saturday 11 April 2026 07:27:56 +0000 (0:00:01.744) 0:00:03.340 ********
2026-04-11 07:28:28.885221 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-11 07:28:28.885234 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-11 07:28:28.885244 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-11 07:28:28.885255 | orchestrator |
2026-04-11 07:28:28.885266 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-11 07:28:28.885277 | orchestrator |
2026-04-11 07:28:28.885288 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-11 07:28:28.885298 | orchestrator | Saturday 11 April 2026 07:27:58 +0000 (0:00:02.239) 0:00:05.580 ********
2026-04-11 07:28:28.885310 | orchestrator | included: /ansible/roles/octavia/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 07:28:28.885326 | orchestrator |
2026-04-11 07:28:28.885345 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-11 07:28:28.885363 | orchestrator | Saturday 11 April 2026 07:28:01 +0000 (0:00:03.270) 0:00:08.850 ********
2026-04-11 07:28:28.885382 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 07:28:28.885441 | orchestrator |
2026-04-11 07:28:28.885462 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-04-11 07:28:28.885484 | orchestrator | Saturday 11 April 2026 07:28:03 +0000 (0:00:01.729) 0:00:10.580 ********
2026-04-11 07:28:28.885503 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:28:28.885523 | orchestrator |
2026-04-11 07:28:28.885539 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-11 07:28:28.885552 | orchestrator | Saturday 11 April 2026 07:28:08 +0000 (0:00:05.332) 0:00:15.913 ********
2026-04-11 07:28:28.885565 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:28:28.885577 | orchestrator |
2026-04-11 07:28:28.885590 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-04-11 07:28:28.885603 | orchestrator | Saturday 11 April 2026 07:28:12 +0000 (0:00:04.131) 0:00:20.045 ********
2026-04-11 07:28:28.885616 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-11 07:28:28.885630 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-11 07:28:28.885643 | orchestrator |
2026-04-11 07:28:28.885655 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-04-11 07:28:28.885668 | orchestrator | Saturday 11 April 2026 07:28:21 +0000 (0:00:08.266) 0:00:28.311 ********
2026-04-11 07:28:28.885680 | orchestrator | ok:
[testbed-node-0] 2026-04-11 07:28:28.885693 | orchestrator | 2026-04-11 07:28:28.885705 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-11 07:28:28.885717 | orchestrator | Saturday 11 April 2026 07:28:25 +0000 (0:00:04.565) 0:00:32.877 ******** 2026-04-11 07:28:28.885730 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:28:28.885742 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:28:28.885755 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:28:28.885768 | orchestrator | 2026-04-11 07:28:28.885779 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-11 07:28:28.885790 | orchestrator | Saturday 11 April 2026 07:28:27 +0000 (0:00:01.384) 0:00:34.261 ******** 2026-04-11 07:28:28.885824 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 07:28:28.885864 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 07:28:28.885879 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 07:28:28.885902 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 07:28:28.885914 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 07:28:28.885933 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 07:28:28.885945 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 07:28:28.885969 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 07:28:33.769103 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 07:28:33.769216 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 07:28:33.769232 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:28:33.769246 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 07:28:33.769276 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 07:28:33.769288 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:28:33.769321 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:28:33.769358 | orchestrator |
2026-04-11 07:28:33.769373 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-04-11 07:28:33.769395 | orchestrator | Saturday 11 April 2026 07:28:30 +0000 (0:00:03.788) 0:00:38.049 ********
2026-04-11 07:28:33.769415 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:28:33.769434 | orchestrator |
2026-04-11 07:28:33.769451 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-04-11 07:28:33.769468 | orchestrator | Saturday 11 April 2026 07:28:31 +0000 (0:00:01.125) 0:00:39.175 ********
2026-04-11 07:28:33.769485 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:28:33.769503 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:28:33.769521 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:28:33.769541 | orchestrator |
2026-04-11 07:28:33.769562 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-04-11 07:28:33.769581 | orchestrator | Saturday 11 April 2026 07:28:33 +0000 (0:00:01.436) 0:00:40.611 ********
2026-04-11 07:28:33.769603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 07:28:33.769621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 07:28:33.769643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 07:28:33.769657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 07:28:33.769689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:28:38.406524 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:28:38.406636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 07:28:38.406658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 07:28:38.406672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 07:28:38.406702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 07:28:38.406737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:28:38.406749 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:28:38.406779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 07:28:38.406792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 07:28:38.406804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 07:28:38.406816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 07:28:38.406833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:28:38.406852 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:28:38.406864 | orchestrator |
2026-04-11 07:28:38.406876 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-11 07:28:38.406888 | orchestrator | Saturday 11 April 2026 07:28:35 +0000 (0:00:01.843) 0:00:42.454 ********
2026-04-11 07:28:38.406900 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 07:28:38.406911 | orchestrator |
2026-04-11 07:28:38.406923 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-04-11 07:28:38.406934 | orchestrator | Saturday 11 April 2026 07:28:36 +0000 (0:00:01.687) 0:00:44.141 ********
2026-04-11 07:28:38.406954 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 07:28:41.750395 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 07:28:41.750516 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 07:28:41.750554 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 07:28:41.750584 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 07:28:41.750593 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 07:28:41.750620 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 07:28:41.750630 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 07:28:41.750639 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 07:28:41.750647 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 07:28:41.750666 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 07:28:41.750675 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 07:28:41.750684 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:28:41.750699 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:28:43.491893 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:28:43.491992 | orchestrator |
2026-04-11 07:28:43.492009 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-04-11 07:28:43.492093 | orchestrator | Saturday 11 April 2026 07:28:42 +0000 (0:00:05.968) 0:00:50.109 ********
2026-04-11 07:28:43.492126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 07:28:43.492161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 07:28:43.492170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 07:28:43.492179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 07:28:43.492201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:28:43.492208 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:28:43.492217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 07:28:43.492233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 07:28:43.492240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 07:28:43.492246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 07:28:43.492253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:28:43.492259 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:28:43.492272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-11 07:28:45.116962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-11 07:28:45.117205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-11 07:28:45.117232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-11 07:28:45.117245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-11 07:28:45.117257 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:28:45.117271 | orchestrator |
2026-04-11 07:28:45.117281 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-04-11 07:28:45.117289 | orchestrator | Saturday 11 April 2026 07:28:44 +0000 (0:00:01.749) 0:00:51.859 ********
2026-04-11 07:28:45.117296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '',
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 07:28:45.117323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 07:28:45.117339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 07:28:45.117350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 07:28:45.117357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:28:45.117363 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:28:45.117370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 07:28:45.117378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 07:28:45.117391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 07:28:48.824437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 07:28:48.824539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:28:48.824551 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:28:48.824561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 07:28:48.824572 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 07:28:48.824581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 07:28:48.824616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 07:28:48.824624 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:28:48.824630 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:28:48.824637 | orchestrator | 2026-04-11 07:28:48.824644 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-11 07:28:48.824653 | orchestrator | Saturday 11 April 2026 07:28:46 +0000 (0:00:01.699) 0:00:53.559 ******** 2026-04-11 07:28:48.824663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 07:28:48.824671 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 07:28:48.824693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 
07:28:48.824721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 07:29:00.385100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 07:29:00.385232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 07:29:00.385251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:00.385266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:00.385279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:00.385309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:00.385339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:00.385357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:00.385369 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:29:00.385380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:29:00.385392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:29:00.385412 | orchestrator | 2026-04-11 07:29:00.385425 | orchestrator | TASK 
[octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-11 07:29:00.385438 | orchestrator | Saturday 11 April 2026 07:28:53 +0000 (0:00:07.316) 0:01:00.876 ******** 2026-04-11 07:29:00.385449 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-11 07:29:00.385460 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-11 07:29:00.385471 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-11 07:29:00.385482 | orchestrator | 2026-04-11 07:29:00.385493 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-11 07:29:00.385504 | orchestrator | Saturday 11 April 2026 07:28:56 +0000 (0:00:02.768) 0:01:03.645 ******** 2026-04-11 07:29:00.385523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 07:29:13.936832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 07:29:13.936925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 07:29:13.936998 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 07:29:13.937007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 07:29:13.937014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 07:29:13.937034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:13.937048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:13.937056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:13.937062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:13.937075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:13.937081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:13.937088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:29:13.937103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:29:39.252982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:29:39.253093 | orchestrator | 2026-04-11 07:29:39.253110 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-11 07:29:39.253122 | 
orchestrator | Saturday 11 April 2026 07:29:15 +0000 (0:00:18.783) 0:01:22.428 ******** 2026-04-11 07:29:39.253133 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:29:39.253143 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:29:39.253153 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:29:39.253185 | orchestrator | 2026-04-11 07:29:39.253196 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-11 07:29:39.253205 | orchestrator | Saturday 11 April 2026 07:29:17 +0000 (0:00:02.766) 0:01:25.195 ******** 2026-04-11 07:29:39.253216 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-11 07:29:39.253226 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-11 07:29:39.253235 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-11 07:29:39.253245 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-11 07:29:39.253255 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-11 07:29:39.253265 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-11 07:29:39.253274 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-11 07:29:39.253284 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-11 07:29:39.253293 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-11 07:29:39.253303 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-11 07:29:39.253312 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-11 07:29:39.253322 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-11 07:29:39.253331 | orchestrator | 2026-04-11 07:29:39.253341 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-11 07:29:39.253351 | orchestrator | Saturday 11 April 2026 07:29:23 +0000 (0:00:05.990) 0:01:31.185 ******** 
2026-04-11 07:29:39.253360 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-11 07:29:39.253370 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-11 07:29:39.253379 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-11 07:29:39.253389 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-11 07:29:39.253398 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-11 07:29:39.253408 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-11 07:29:39.253417 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-11 07:29:39.253427 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-11 07:29:39.253436 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-11 07:29:39.253446 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-11 07:29:39.253455 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-11 07:29:39.253465 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-11 07:29:39.253474 | orchestrator | 2026-04-11 07:29:39.253486 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-11 07:29:39.253497 | orchestrator | Saturday 11 April 2026 07:29:30 +0000 (0:00:06.298) 0:01:37.485 ******** 2026-04-11 07:29:39.253508 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-11 07:29:39.253520 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-11 07:29:39.253531 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-11 07:29:39.253542 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-11 07:29:39.253553 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-11 07:29:39.253564 | orchestrator | ok: [testbed-node-2] => 
(item=client_ca.cert.pem) 2026-04-11 07:29:39.253575 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-11 07:29:39.253586 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-11 07:29:39.253597 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-11 07:29:39.253608 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-11 07:29:39.253618 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-11 07:29:39.253629 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-11 07:29:39.253640 | orchestrator | 2026-04-11 07:29:39.253651 | orchestrator | TASK [service-check-containers : octavia | Check containers] ******************* 2026-04-11 07:29:39.253672 | orchestrator | Saturday 11 April 2026 07:29:36 +0000 (0:00:06.514) 0:01:43.999 ******** 2026-04-11 07:29:39.253714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 07:29:39.253731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 07:29:39.253744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-11 07:29:39.253757 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 07:29:39.253770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 07:29:39.253801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-11 07:29:44.737644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:44.737756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:44.737775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:44.737787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:44.737800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:44.737850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-11 07:29:44.737882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:29:44.737957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:29:44.737969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-11 07:29:44.737981 | orchestrator | 2026-04-11 07:29:44.737994 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-04-11 07:29:44.738007 | 
orchestrator | Saturday 11 April 2026 07:29:42 +0000 (0:00:06.140) 0:01:50.139 ******** 2026-04-11 07:29:44.738076 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 07:29:44.738093 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:29:44.738105 | orchestrator | } 2026-04-11 07:29:44.738116 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 07:29:44.738128 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:29:44.738139 | orchestrator | } 2026-04-11 07:29:44.738186 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 07:29:44.738200 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:29:44.738213 | orchestrator | } 2026-04-11 07:29:44.738225 | orchestrator | 2026-04-11 07:29:44.738238 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 07:29:44.738251 | orchestrator | Saturday 11 April 2026 07:29:44 +0000 (0:00:01.465) 0:01:51.605 ******** 2026-04-11 07:29:44.738266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 07:29:44.738300 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 07:29:44.738328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 07:29:45.035736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 07:29:45.035833 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:29:45.035848 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:29:45.035863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 07:29:45.035959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-11 07:29:45.035990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 07:29:45.036021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 07:29:45.036034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:29:45.036045 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:29:45.036057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-11 07:29:45.036069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}})  2026-04-11 07:29:45.036088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-11 07:29:45.036105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-11 07:29:45.036123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-11 07:31:20.129935 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:31:20.130116 | orchestrator | 2026-04-11 07:31:20.130136 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-11 07:31:20.130149 | orchestrator | Saturday 11 April 2026 07:29:46 +0000 (0:00:02.334) 0:01:53.939 ******** 2026-04-11 07:31:20.130160 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:31:20.130172 | orchestrator | 2026-04-11 07:31:20.130183 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-11 07:31:20.130194 | orchestrator | Saturday 11 April 2026 07:29:59 +0000 (0:00:13.011) 0:02:06.951 ******** 2026-04-11 07:31:20.130205 | orchestrator | 2026-04-11 07:31:20.130216 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-11 07:31:20.130227 | orchestrator | Saturday 11 April 2026 07:30:00 +0000 (0:00:00.436) 0:02:07.388 ******** 2026-04-11 07:31:20.130238 | orchestrator | 2026-04-11 07:31:20.130249 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-11 07:31:20.130260 | orchestrator | Saturday 11 April 2026 07:30:00 +0000 (0:00:00.453) 0:02:07.841 ******** 2026-04-11 07:31:20.130270 | orchestrator | 2026-04-11 07:31:20.130281 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-11 07:31:20.130292 | orchestrator | Saturday 11 April 2026 07:30:01 +0000 (0:00:00.784) 0:02:08.626 ******** 2026-04-11 07:31:20.130303 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:31:20.130314 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:31:20.130325 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:31:20.130336 | orchestrator | 2026-04-11 07:31:20.130371 | orchestrator | RUNNING HANDLER [octavia : Restart 
octavia-driver-agent container] ************* 2026-04-11 07:31:20.130382 | orchestrator | Saturday 11 April 2026 07:30:20 +0000 (0:00:18.996) 0:02:27.622 ******** 2026-04-11 07:31:20.130393 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:31:20.130404 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:31:20.130415 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:31:20.130426 | orchestrator | 2026-04-11 07:31:20.130437 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-11 07:31:20.130448 | orchestrator | Saturday 11 April 2026 07:30:34 +0000 (0:00:14.540) 0:02:42.162 ******** 2026-04-11 07:31:20.130460 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:31:20.130473 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:31:20.130486 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:31:20.130498 | orchestrator | 2026-04-11 07:31:20.130511 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-11 07:31:20.130524 | orchestrator | Saturday 11 April 2026 07:30:47 +0000 (0:00:13.017) 0:02:55.180 ******** 2026-04-11 07:31:20.130537 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:31:20.130549 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:31:20.130561 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:31:20.130574 | orchestrator | 2026-04-11 07:31:20.130587 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-11 07:31:20.130599 | orchestrator | Saturday 11 April 2026 07:31:01 +0000 (0:00:13.369) 0:03:08.550 ******** 2026-04-11 07:31:20.130612 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:31:20.130624 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:31:20.130637 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:31:20.130649 | orchestrator | 2026-04-11 07:31:20.130662 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-11 07:31:20.130674 | orchestrator | testbed-node-0 : ok=27  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 07:31:20.130691 | orchestrator | testbed-node-1 : ok=22  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-11 07:31:20.130743 | orchestrator | testbed-node-2 : ok=22  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-11 07:31:20.130770 | orchestrator | 2026-04-11 07:31:20.130789 | orchestrator | 2026-04-11 07:31:20.130807 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:31:20.130825 | orchestrator | Saturday 11 April 2026 07:31:19 +0000 (0:00:18.383) 0:03:26.934 ******** 2026-04-11 07:31:20.130841 | orchestrator | =============================================================================== 2026-04-11 07:31:20.130857 | orchestrator | octavia : Restart octavia-api container -------------------------------- 19.00s 2026-04-11 07:31:20.130873 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.78s 2026-04-11 07:31:20.130889 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 18.38s 2026-04-11 07:31:20.130906 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 14.54s 2026-04-11 07:31:20.130923 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 13.37s 2026-04-11 07:31:20.130961 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 13.02s 2026-04-11 07:31:20.130980 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 13.01s 2026-04-11 07:31:20.130999 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.27s 2026-04-11 07:31:20.131017 | orchestrator | octavia : Copying over config.json 
files for services ------------------- 7.32s 2026-04-11 07:31:20.131035 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 6.51s 2026-04-11 07:31:20.131053 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.30s 2026-04-11 07:31:20.131070 | orchestrator | service-check-containers : octavia | Check containers ------------------- 6.14s 2026-04-11 07:31:20.131104 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.99s 2026-04-11 07:31:20.131123 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.97s 2026-04-11 07:31:20.131172 | orchestrator | octavia : Get amphora flavor info --------------------------------------- 5.33s 2026-04-11 07:31:20.131195 | orchestrator | octavia : Get loadbalancer management network --------------------------- 4.57s 2026-04-11 07:31:20.131214 | orchestrator | octavia : Get service project id ---------------------------------------- 4.13s 2026-04-11 07:31:20.131233 | orchestrator | octavia : Ensuring config directories exist ----------------------------- 3.79s 2026-04-11 07:31:20.131252 | orchestrator | octavia : include_tasks ------------------------------------------------- 3.27s 2026-04-11 07:31:20.131271 | orchestrator | octavia : Copying over octavia-wsgi.conf -------------------------------- 2.77s 2026-04-11 07:31:20.337672 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-11 07:31:20.337782 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/310-openstack-extended.sh 2026-04-11 07:31:21.750587 | orchestrator | 2026-04-11 07:31:21 | INFO  | Prepare task for execution of gnocchi. 2026-04-11 07:31:21.820411 | orchestrator | 2026-04-11 07:31:21 | INFO  | Task 08be28f0-1a73-4ac7-9a13-baa4eebb7144 (gnocchi) was prepared for execution. 
2026-04-11 07:31:21.820495 | orchestrator | 2026-04-11 07:31:21 | INFO  | It takes a moment until task 08be28f0-1a73-4ac7-9a13-baa4eebb7144 (gnocchi) has been started and output is visible here. 2026-04-11 07:31:35.515624 | orchestrator | 2026-04-11 07:31:35.515787 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 07:31:35.515805 | orchestrator | 2026-04-11 07:31:35.515817 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 07:31:35.515828 | orchestrator | Saturday 11 April 2026 07:31:26 +0000 (0:00:01.745) 0:00:01.745 ******** 2026-04-11 07:31:35.515839 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:31:35.515851 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:31:35.515862 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:31:35.515873 | orchestrator | 2026-04-11 07:31:35.515884 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 07:31:35.515895 | orchestrator | Saturday 11 April 2026 07:31:28 +0000 (0:00:01.712) 0:00:03.457 ******** 2026-04-11 07:31:35.515906 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-04-11 07:31:35.515918 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-04-11 07:31:35.515929 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-04-11 07:31:35.515940 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-04-11 07:31:35.515951 | orchestrator | 2026-04-11 07:31:35.515962 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-04-11 07:31:35.515973 | orchestrator | skipping: no hosts matched 2026-04-11 07:31:35.515985 | orchestrator | 2026-04-11 07:31:35.515996 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:31:35.516008 | orchestrator | 
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 07:31:35.516020 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 07:31:35.516031 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-11 07:31:35.516042 | orchestrator | 2026-04-11 07:31:35.516053 | orchestrator | 2026-04-11 07:31:35.516063 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:31:35.516075 | orchestrator | Saturday 11 April 2026 07:31:35 +0000 (0:00:06.502) 0:00:09.960 ******** 2026-04-11 07:31:35.516085 | orchestrator | =============================================================================== 2026-04-11 07:31:35.516120 | orchestrator | Group hosts based on enabled services ----------------------------------- 6.50s 2026-04-11 07:31:35.516133 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.71s 2026-04-11 07:31:36.993289 | orchestrator | 2026-04-11 07:31:36 | INFO  | Prepare task for execution of manila. 2026-04-11 07:31:37.059348 | orchestrator | 2026-04-11 07:31:37 | INFO  | Task c2f48ac2-c58c-41d8-a17e-ef4a4ee77da9 (manila) was prepared for execution. 2026-04-11 07:31:37.059443 | orchestrator | 2026-04-11 07:31:37 | INFO  | It takes a moment until task c2f48ac2-c58c-41d8-a17e-ef4a4ee77da9 (manila) has been started and output is visible here. 
2026-04-11 07:31:50.843846 | orchestrator | 2026-04-11 07:31:50.843978 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 07:31:50.844002 | orchestrator | 2026-04-11 07:31:50.844037 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 07:31:50.844053 | orchestrator | Saturday 11 April 2026 07:31:42 +0000 (0:00:01.632) 0:00:01.632 ******** 2026-04-11 07:31:50.844069 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:31:50.844085 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:31:50.844101 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:31:50.844116 | orchestrator | 2026-04-11 07:31:50.844131 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 07:31:50.844146 | orchestrator | Saturday 11 April 2026 07:31:43 +0000 (0:00:01.705) 0:00:03.337 ******** 2026-04-11 07:31:50.844161 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-04-11 07:31:50.844176 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-04-11 07:31:50.844192 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-04-11 07:31:50.844207 | orchestrator | 2026-04-11 07:31:50.844222 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-04-11 07:31:50.844237 | orchestrator | 2026-04-11 07:31:50.844252 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-11 07:31:50.844267 | orchestrator | Saturday 11 April 2026 07:31:45 +0000 (0:00:01.844) 0:00:05.182 ******** 2026-04-11 07:31:50.844282 | orchestrator | included: /ansible/roles/manila/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:31:50.844299 | orchestrator | 2026-04-11 07:31:50.844313 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-04-11 
07:31:50.844328 | orchestrator | Saturday 11 April 2026 07:31:48 +0000 (0:00:02.704) 0:00:07.887 ******** 2026-04-11 07:31:50.844346 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:31:50.844366 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:31:50.844406 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:31:50.844448 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:31:50.844466 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:31:50.844483 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:31:50.844500 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 07:31:50.844526 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 07:31:50.844543 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 07:31:50.844574 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:08.668700 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:08.668816 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:08.668834 | orchestrator | 2026-04-11 07:32:08.668848 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-11 07:32:08.668861 | orchestrator | Saturday 11 April 2026 07:31:52 +0000 (0:00:03.650) 0:00:11.537 ******** 2026-04-11 07:32:08.668873 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:32:08.668884 | orchestrator | 2026-04-11 07:32:08.668895 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-04-11 07:32:08.668906 | orchestrator | Saturday 11 April 2026 07:31:53 +0000 (0:00:01.832) 0:00:13.370 ******** 2026-04-11 
07:32:08.668917 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:32:08.668952 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:32:08.668963 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:32:08.668974 | orchestrator | 2026-04-11 07:32:08.668985 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-04-11 07:32:08.668995 | orchestrator | Saturday 11 April 2026 07:31:55 +0000 (0:00:02.048) 0:00:15.419 ******** 2026-04-11 07:32:08.669007 | orchestrator | ok: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-11 07:32:08.669020 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-11 07:32:08.669031 | orchestrator | ok: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-11 07:32:08.669042 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-11 07:32:08.669053 | orchestrator | ok: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-11 07:32:08.669064 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-11 07:32:08.669075 | orchestrator | 2026-04-11 07:32:08.669086 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-04-11 07:32:08.669096 | orchestrator | Saturday 
11 April 2026 07:31:58 +0000 (0:00:02.564) 0:00:17.983 ******** 2026-04-11 07:32:08.669108 | orchestrator | ok: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-11 07:32:08.669119 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-11 07:32:08.669130 | orchestrator | ok: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-11 07:32:08.669155 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-11 07:32:08.669182 | orchestrator | ok: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-11 07:32:08.669197 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-11 07:32:08.669210 | orchestrator | 2026-04-11 07:32:08.669224 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-04-11 07:32:08.669237 | orchestrator | Saturday 11 April 2026 07:32:00 +0000 (0:00:02.259) 0:00:20.243 ******** 2026-04-11 07:32:08.669250 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-04-11 07:32:08.669263 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-04-11 07:32:08.669276 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-04-11 07:32:08.669288 | orchestrator | 2026-04-11 07:32:08.669301 | 
orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-04-11 07:32:08.669314 | orchestrator | Saturday 11 April 2026 07:32:02 +0000 (0:00:01.926) 0:00:22.170 ******** 2026-04-11 07:32:08.669327 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:32:08.669340 | orchestrator | 2026-04-11 07:32:08.669353 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-04-11 07:32:08.669386 | orchestrator | Saturday 11 April 2026 07:32:03 +0000 (0:00:01.133) 0:00:23.304 ******** 2026-04-11 07:32:08.669399 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:32:08.669412 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:32:08.669425 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:32:08.669439 | orchestrator | 2026-04-11 07:32:08.669452 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-11 07:32:08.669465 | orchestrator | Saturday 11 April 2026 07:32:05 +0000 (0:00:01.343) 0:00:24.648 ******** 2026-04-11 07:32:08.669478 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:32:08.669491 | orchestrator | 2026-04-11 07:32:08.669503 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-04-11 07:32:08.669516 | orchestrator | Saturday 11 April 2026 07:32:07 +0000 (0:00:02.007) 0:00:26.656 ******** 2026-04-11 07:32:08.669531 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:32:08.669546 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:32:08.669573 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:32:12.755688 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:12.755839 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:12.755856 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 
'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:12.755870 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:12.755883 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:12.755909 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:12.755941 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:12.755963 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:12.755975 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:12.755987 | orchestrator | 2026-04-11 07:32:12.756000 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-04-11 07:32:12.756013 | orchestrator | Saturday 11 April 2026 07:32:12 +0000 (0:00:05.012) 0:00:31.669 ******** 2026-04-11 07:32:12.756026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:32:12.756039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:32:12.756065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:32:15.146073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 07:32:15.146157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:32:15.146166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 
07:32:15.146172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 07:32:15.146178 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:32:15.146197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 07:32:15.146229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:32:15.146234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 07:32:15.146239 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:32:15.146243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 07:32:15.146248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 07:32:15.146253 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:32:15.146257 | orchestrator | 2026-04-11 07:32:15.146263 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-04-11 07:32:15.146269 | orchestrator | Saturday 11 April 2026 07:32:14 +0000 (0:00:02.309) 0:00:33.979 ******** 2026-04-11 07:32:15.146276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:32:15.146289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:32:18.422968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:32:18.423067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:32:18.423078 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:32:18.423087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 07:32:18.423125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 07:32:18.423146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-11 07:32:18.423153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 07:32:18.423161 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:32:18.423170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 07:32:18.423863 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:32:18.423878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-11 07:32:18.423886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-11 07:32:18.423901 | orchestrator | skipping: 
[testbed-node-2] 2026-04-11 07:32:18.423908 | orchestrator | 2026-04-11 07:32:18.423915 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-04-11 07:32:18.423923 | orchestrator | Saturday 11 April 2026 07:32:16 +0000 (0:00:02.446) 0:00:36.426 ******** 2026-04-11 07:32:18.423934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:32:18.423950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:32:24.863769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:32:24.863908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:24.863965 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:24.863979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:24.863991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': 
'30'}}}) 2026-04-11 07:32:24.864023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:24.864035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:24.864047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:24.864066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:24.864082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-11 07:32:24.864095 | orchestrator | 2026-04-11 07:32:24.864108 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-04-11 07:32:24.864120 | orchestrator | Saturday 11 April 2026 07:32:22 +0000 (0:00:05.321) 0:00:41.747 ******** 2026-04-11 07:32:24.864132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:32:24.864153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:32:35.305358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:32:35.305632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:32:35.306425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 07:32:35.306464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:32:35.306486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 07:32:35.306533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:32:35.306619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 07:32:35.306635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 07:32:35.306656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 07:32:35.306669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 07:32:35.306681 | orchestrator |
2026-04-11 07:32:35.306695 | orchestrator | TASK [manila : Copying over manila-share.conf] *********************************
2026-04-11 07:32:35.306707 | orchestrator | Saturday 11 April 2026 07:32:30 +0000 (0:00:07.725) 0:00:49.473 ********
2026-04-11 07:32:35.306719 | orchestrator | changed: [testbed-node-0] => (item=manila-share)
2026-04-11 07:32:35.306731 | orchestrator | changed: [testbed-node-1] => (item=manila-share)
2026-04-11 07:32:35.306742 | orchestrator | changed: [testbed-node-2] => (item=manila-share)
2026-04-11 07:32:35.306753 | orchestrator |
2026-04-11 07:32:35.306764 | orchestrator | TASK [manila : Copying over existing policy file] ******************************
2026-04-11 07:32:35.306775 | orchestrator | Saturday 11 April 2026 07:32:34 +0000 (0:00:04.702) 0:00:54.175 ********
2026-04-11 07:32:35.306799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:32:38.379281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:32:38.379385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 07:32:38.379417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 07:32:38.379431 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:32:38.379446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:32:38.379459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:32:38.379511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 07:32:38.379524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 07:32:38.379535 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:32:38.379552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:32:38.379633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:32:38.379645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 07:32:38.379657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 07:32:38.379678 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:32:38.379689 | orchestrator |
2026-04-11 07:32:38.379701 | orchestrator | TASK [service-check-containers : manila | Check containers] ********************
2026-04-11 07:32:38.379714 | orchestrator | Saturday 11 April 2026 07:32:37 +0000 (0:00:02.326) 0:00:56.502 ********
2026-04-11 07:32:38.379736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:32:42.350532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:32:42.350641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:32:42.350650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:32:42.350671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:32:42.350675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:32:42.350690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 07:32:42.350700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 07:32:42.350704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 07:32:42.350708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 07:32:42.350717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 07:32:42.350721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 07:32:42.350725 | orchestrator |
2026-04-11 07:32:42.350730 | orchestrator | TASK [service-check-containers : manila | Notify handlers to restart containers] ***
2026-04-11 07:32:42.350735 | orchestrator | Saturday 11 April 2026 07:32:42 +0000 (0:00:04.976) 0:01:01.479 ********
2026-04-11 07:32:42.350740 | orchestrator | changed: [testbed-node-0] => {
2026-04-11 07:32:42.350746 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 07:32:42.350752 | orchestrator | }
2026-04-11 07:32:42.350758 | orchestrator | changed: [testbed-node-1] => {
2026-04-11 07:32:42.350764 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 07:32:42.350769 | orchestrator | }
2026-04-11 07:32:42.350775 | orchestrator | changed: [testbed-node-2] => {
2026-04-11 07:32:42.350784 | orchestrator |  "msg": "Notifying handlers"
2026-04-11 07:32:44.307314 | orchestrator | }
2026-04-11 07:32:44.307426 | orchestrator |
2026-04-11 07:32:44.307443 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-11 07:32:44.307457 | orchestrator | Saturday 11 April 2026 07:32:43 +0000 (0:00:01.371) 0:01:02.851 ********
2026-04-11 07:32:44.307490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:32:44.307506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:32:44.307541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 07:32:44.307600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 07:32:44.307612 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:32:44.307645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:32:44.307664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:32:44.307676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 07:32:44.307695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 07:32:44.307707 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:32:44.307718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:32:44.307729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-11 07:32:44.307749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-11 07:36:18.857209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-11 07:36:18.857396 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:36:18.857426 | orchestrator |
2026-04-11 07:36:18.857443 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-04-11 07:36:18.857460 | orchestrator | Saturday 11 April 2026 07:32:45 +0000 (0:00:02.526) 0:01:05.378 ********
2026-04-11 07:36:18.857476 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:36:18.857519 | orchestrator |
2026-04-11 07:36:18.857535 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-04-11 07:36:18.857550 | orchestrator | Saturday 11 April 2026 07:33:05 +0000 (0:00:19.613) 0:01:24.991 ********
2026-04-11 07:36:18.857565 | orchestrator |
2026-04-11 07:36:18.857578 | orchestrator
| TASK [manila : Flush handlers] *************************************************
2026-04-11 07:36:18.857593 | orchestrator | Saturday 11 April 2026 07:33:05 +0000 (0:00:00.441) 0:01:25.433 ********
2026-04-11 07:36:18.857608 | orchestrator |
2026-04-11 07:36:18.857623 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-04-11 07:36:18.857638 | orchestrator | Saturday 11 April 2026 07:33:06 +0000 (0:00:00.449) 0:01:25.882 ********
2026-04-11 07:36:18.857652 | orchestrator |
2026-04-11 07:36:18.857666 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-04-11 07:36:18.857680 | orchestrator | Saturday 11 April 2026 07:33:07 +0000 (0:00:00.805) 0:01:26.687 ********
2026-04-11 07:36:18.857696 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:36:18.857712 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:36:18.857728 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:36:18.857744 | orchestrator |
2026-04-11 07:36:18.857759 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-04-11 07:36:18.857775 | orchestrator | Saturday 11 April 2026 07:33:25 +0000 (0:00:17.894) 0:01:44.582 ********
2026-04-11 07:36:18.857791 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:36:18.857807 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:36:18.857822 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:36:18.857835 | orchestrator |
2026-04-11 07:36:18.857846 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-04-11 07:36:18.857857 | orchestrator | Saturday 11 April 2026 07:33:43 +0000 (0:00:18.547) 0:02:03.129 ********
2026-04-11 07:36:18.857867 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:36:18.857880 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:36:18.857892 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:36:18.857905 | orchestrator |
2026-04-11 07:36:18.857918 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-04-11 07:36:18.857931 | orchestrator | Saturday 11 April 2026 07:33:56 +0000 (0:00:12.806) 0:02:15.936 ********
2026-04-11 07:36:18.857946 | orchestrator |
2026-04-11 07:36:18.857959 | orchestrator | STILL ALIVE [task 'manila : Restart manila-share container' is running] ********
2026-04-11 07:36:18.857971 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:36:18.857985 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:36:18.857998 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:36:18.858011 | orchestrator |
2026-04-11 07:36:18.858083 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 07:36:18.858098 | orchestrator | testbed-node-0 : ok=21  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 07:36:18.858149 | orchestrator | testbed-node-1 : ok=20  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-11 07:36:18.858162 | orchestrator | testbed-node-2 : ok=20  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-11 07:36:18.858174 | orchestrator |
2026-04-11 07:36:18.858186 | orchestrator |
2026-04-11 07:36:18.858198 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 07:36:18.858209 | orchestrator | Saturday 11 April 2026 07:36:18 +0000 (0:02:21.889) 0:04:37.825 ********
2026-04-11 07:36:18.858221 | orchestrator | ===============================================================================
2026-04-11 07:36:18.858258 | orchestrator | manila : Restart manila-share container ------------------------------- 141.89s
2026-04-11 07:36:18.858271 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 19.61s
2026-04-11 07:36:18.858282 | orchestrator | manila : Restart manila-data container --------------------------------- 18.55s
2026-04-11 07:36:18.858307 | orchestrator | manila : Restart manila-api container ---------------------------------- 17.90s
2026-04-11 07:36:18.858319 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 12.81s
2026-04-11 07:36:18.858330 | orchestrator | manila : Copying over manila.conf --------------------------------------- 7.73s
2026-04-11 07:36:18.858341 | orchestrator | manila : Copying over config.json files for services -------------------- 5.32s
2026-04-11 07:36:18.858352 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 5.01s
2026-04-11 07:36:18.858365 | orchestrator | service-check-containers : manila | Check containers -------------------- 4.98s
2026-04-11 07:36:18.858400 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 4.70s
2026-04-11 07:36:18.858413 | orchestrator | manila : Ensuring config directories exist ------------------------------ 3.65s
2026-04-11 07:36:18.858424 | orchestrator | manila : include_tasks -------------------------------------------------- 2.70s
2026-04-11 07:36:18.858436 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 2.56s
2026-04-11 07:36:18.858447 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.53s
2026-04-11 07:36:18.858470 | orchestrator | service-cert-copy : manila | Copying over backend internal TLS key ------ 2.45s
2026-04-11 07:36:18.858482 | orchestrator | manila : Copying over existing policy file ------------------------------ 2.33s
2026-04-11 07:36:18.858494 | orchestrator | service-cert-copy : manila | Copying over backend internal TLS certificate --- 2.31s
2026-04-11 07:36:18.858507 | orchestrator | manila : Copy over ceph Manila keyrings --------------------------------- 2.26s
2026-04-11 07:36:18.858519 | orchestrator | manila : Ensuring manila service ceph config subdir exists -------------- 2.05s
2026-04-11 07:36:18.858531 | orchestrator | manila : include_tasks -------------------------------------------------- 2.01s
2026-04-11 07:36:19.056967 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-11 07:36:19.057068 | orchestrator | + osism migrate rabbitmq3to4 delete
2026-04-11 07:36:25.394775 | orchestrator | 2026-04-11 07:36:25 | ERROR  | Unable to get ansible vault password
2026-04-11 07:36:25.394881 | orchestrator | 2026-04-11 07:36:25 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-11 07:36:25.394900 | orchestrator | 2026-04-11 07:36:25 | ERROR  | Dropping encrypted entries
2026-04-11 07:36:25.432371 | orchestrator | 2026-04-11 07:36:25 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-11 07:36:25.653751 | orchestrator | 2026-04-11 07:36:25 | INFO  | Found 128 classic queue(s) in vhost '/'
2026-04-11 07:36:25.727118 | orchestrator | 2026-04-11 07:36:25 | INFO  | Deleted queue: alarm.all.sample
2026-04-11 07:36:25.782005 | orchestrator | 2026-04-11 07:36:25 | INFO  | Deleted queue: alarming.sample
2026-04-11 07:36:25.823554 | orchestrator | 2026-04-11 07:36:25 | INFO  | Deleted queue: barbican.workers
2026-04-11 07:36:25.876751 | orchestrator | 2026-04-11 07:36:25 | INFO  | Deleted queue: barbican.workers.barbican.queue
2026-04-11 07:36:25.931429 | orchestrator | 2026-04-11 07:36:25 | INFO  | Deleted queue: barbican.workers_fanout_93d3a167a25d44af80c89a84b965a54e
2026-04-11 07:36:25.973803 | orchestrator | 2026-04-11 07:36:25 | INFO  | Deleted queue: barbican.workers_fanout_a21455ce0cd64d9490c52a8d0c98eba2
2026-04-11 07:36:26.045164 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: barbican.workers_fanout_eee0e8a9a94c4fac9c6524cd04318aaa
2026-04-11 07:36:26.109801 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: barbican_notifications.info 2026-04-11
07:36:26.152907 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: central 2026-04-11 07:36:26.205407 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: central.testbed-node-0 2026-04-11 07:36:26.244496 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: central.testbed-node-1 2026-04-11 07:36:26.288032 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: central.testbed-node-2 2026-04-11 07:36:26.332130 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: central_fanout_011f0b2dc5b746ec86b1745fa898d349 2026-04-11 07:36:26.369608 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: central_fanout_26dbae4cc52848938abfbc2eebc27595 2026-04-11 07:36:26.408544 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: central_fanout_45cf1180b1c24418825232acd18acdf8 2026-04-11 07:36:26.449571 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: central_fanout_7bfda07ac6d14424b344d7a0484abf5b 2026-04-11 07:36:26.487044 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: central_fanout_d918290925824e469505683d84702175 2026-04-11 07:36:26.533556 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: central_fanout_e8f3c6f5de264ddebb970dfa21a9894d 2026-04-11 07:36:26.574066 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: cinder-backup 2026-04-11 07:36:26.621644 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: cinder-backup.testbed-node-0 2026-04-11 07:36:26.677132 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: cinder-backup.testbed-node-1 2026-04-11 07:36:26.720971 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: cinder-backup.testbed-node-2 2026-04-11 07:36:26.765998 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: cinder-scheduler 2026-04-11 07:36:26.808364 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: cinder-scheduler.testbed-node-0 2026-04-11 07:36:26.850412 | orchestrator | 2026-04-11 07:36:26 | INFO  
| Deleted queue: cinder-scheduler.testbed-node-1 2026-04-11 07:36:26.894144 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: cinder-scheduler.testbed-node-2 2026-04-11 07:36:26.937325 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: cinder-volume 2026-04-11 07:36:26.987274 | orchestrator | 2026-04-11 07:36:26 | INFO  | Deleted queue: cinder-volume.testbed-node-0@rbd-volumes 2026-04-11 07:36:27.036202 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 2026-04-11 07:36:27.078630 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: cinder-volume.testbed-node-1@rbd-volumes 2026-04-11 07:36:27.121888 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 2026-04-11 07:36:27.163738 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: cinder-volume.testbed-node-2@rbd-volumes 2026-04-11 07:36:27.214486 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 2026-04-11 07:36:27.252000 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: compute 2026-04-11 07:36:27.302900 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: compute.testbed-node-3 2026-04-11 07:36:27.354212 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: compute.testbed-node-4 2026-04-11 07:36:27.398153 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: compute.testbed-node-5 2026-04-11 07:36:27.449720 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: conductor 2026-04-11 07:36:27.497201 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: conductor.testbed-node-0 2026-04-11 07:36:27.551173 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: conductor.testbed-node-1 2026-04-11 07:36:27.601049 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: conductor.testbed-node-2 2026-04-11 07:36:27.655180 
| orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: event.sample 2026-04-11 07:36:27.679736 | orchestrator | 2026-04-11 07:36:27 | INFO  | Closed connection: 192.168.16.11:38212 -> 192.168.16.10:5672 2026-04-11 07:36:27.698331 | orchestrator | 2026-04-11 07:36:27 | INFO  | Closed connection: 192.168.16.12:53264 -> 192.168.16.11:5672 2026-04-11 07:36:27.714456 | orchestrator | 2026-04-11 07:36:27 | INFO  | Closed connection: 192.168.16.11:38096 -> 192.168.16.10:5672 2026-04-11 07:36:27.730789 | orchestrator | 2026-04-11 07:36:27 | INFO  | Closed connection: 192.168.16.11:38056 -> 192.168.16.10:5672 2026-04-11 07:36:27.748085 | orchestrator | 2026-04-11 07:36:27 | INFO  | Closed connection: 192.168.16.12:58662 -> 192.168.16.10:5672 2026-04-11 07:36:27.768313 | orchestrator | 2026-04-11 07:36:27 | INFO  | Closed connection: 192.168.16.10:47976 -> 192.168.16.11:5672 2026-04-11 07:36:27.791324 | orchestrator | 2026-04-11 07:36:27 | INFO  | Closed connection: 192.168.16.10:51514 -> 192.168.16.10:5672 2026-04-11 07:36:27.805956 | orchestrator | 2026-04-11 07:36:27 | INFO  | Closed connection: 192.168.16.10:51528 -> 192.168.16.10:5672 2026-04-11 07:36:27.820168 | orchestrator | 2026-04-11 07:36:27 | INFO  | Closed connection: 192.168.16.12:53262 -> 192.168.16.11:5672 2026-04-11 07:36:27.820435 | orchestrator | 2026-04-11 07:36:27 | INFO  | Closed 9 connection(s) for queue: magnum-conductor 2026-04-11 07:36:27.853746 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: magnum-conductor 2026-04-11 07:36:27.900509 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: magnum-conductor.o324b5nldxgs 2026-04-11 07:36:27.954842 | orchestrator | 2026-04-11 07:36:27 | INFO  | Deleted queue: magnum-conductor.r6ouq6chauay 2026-04-11 07:36:28.009857 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: magnum-conductor.zveegrxdgu45 2026-04-11 07:36:28.062706 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: 
magnum-conductor_fanout_36aeb47ca98d40ca86774a1a253f6fbc 2026-04-11 07:36:28.106796 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: magnum-conductor_fanout_529f02b99eaa4bf48a5a44e4fa79144a 2026-04-11 07:36:28.150812 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: magnum-conductor_fanout_7056a329ee6e491e9084674a0b2a88da 2026-04-11 07:36:28.194181 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: magnum-conductor_fanout_85f2fca774fc41d29c93a791ce2daf49 2026-04-11 07:36:28.238974 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: magnum-conductor_fanout_9022d045491f438098cb251c983400ef 2026-04-11 07:36:28.275845 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: magnum-conductor_fanout_a4fc77b03dbe402abe0c4054ee018432 2026-04-11 07:36:28.319203 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: magnum-conductor_fanout_d70d8595e8524a0ca43ca8a3441152fe 2026-04-11 07:36:28.367812 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: magnum-conductor_fanout_ea2c139c802d404b80bc37760845f7c4 2026-04-11 07:36:28.412313 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: magnum-conductor_fanout_f74f1fa9920e458aa7ed0976c72b19f6 2026-04-11 07:36:28.456974 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: manila-data 2026-04-11 07:36:28.506717 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: manila-data.testbed-node-0 2026-04-11 07:36:28.567619 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: manila-data.testbed-node-1 2026-04-11 07:36:28.621247 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: manila-data.testbed-node-2 2026-04-11 07:36:28.663339 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: manila-scheduler 2026-04-11 07:36:28.709197 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: manila-scheduler.testbed-node-0 2026-04-11 07:36:28.758690 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: 
manila-scheduler.testbed-node-1 2026-04-11 07:36:28.798133 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: manila-scheduler.testbed-node-2 2026-04-11 07:36:28.830741 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: manila-share 2026-04-11 07:36:28.879162 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: manila-share.testbed-node-0@cephfsnative1 2026-04-11 07:36:28.921289 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: manila-share.testbed-node-1@cephfsnative1 2026-04-11 07:36:28.968518 | orchestrator | 2026-04-11 07:36:28 | INFO  | Deleted queue: manila-share.testbed-node-2@cephfsnative1 2026-04-11 07:36:29.007133 | orchestrator | 2026-04-11 07:36:29 | INFO  | Deleted queue: manila-share_fanout_565c3c86aa52421bb3c8a41fbe690d14 2026-04-11 07:36:29.045288 | orchestrator | 2026-04-11 07:36:29 | INFO  | Deleted queue: manila-share_fanout_a8f0cdd363e740f09e8fdc5f458ac48b 2026-04-11 07:36:29.103510 | orchestrator | 2026-04-11 07:36:29 | INFO  | Deleted queue: manila-share_fanout_ec32c8e306434e819788b9c8e0f9f5a9 2026-04-11 07:36:29.285741 | orchestrator | 2026-04-11 07:36:29 | INFO  | Deleted queue: notifications.audit 2026-04-11 07:36:29.435831 | orchestrator | 2026-04-11 07:36:29 | INFO  | Deleted queue: notifications.critical 2026-04-11 07:36:29.561125 | orchestrator | 2026-04-11 07:36:29 | INFO  | Deleted queue: notifications.debug 2026-04-11 07:36:29.718324 | orchestrator | 2026-04-11 07:36:29 | INFO  | Deleted queue: notifications.error 2026-04-11 07:36:29.861767 | orchestrator | 2026-04-11 07:36:29 | INFO  | Deleted queue: notifications.info 2026-04-11 07:36:30.015444 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: notifications.sample 2026-04-11 07:36:30.211203 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: notifications.warn 2026-04-11 07:36:30.256635 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: octavia_provisioning_v2 2026-04-11 07:36:30.301724 | orchestrator | 
2026-04-11 07:36:30 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-0 2026-04-11 07:36:30.346724 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-1 2026-04-11 07:36:30.395544 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-2 2026-04-11 07:36:30.429617 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: producer 2026-04-11 07:36:30.469403 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: producer.testbed-node-0 2026-04-11 07:36:30.516458 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: producer.testbed-node-1 2026-04-11 07:36:30.565191 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: producer.testbed-node-2 2026-04-11 07:36:30.607648 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: producer_fanout_9f1a725d639f46888d90c57ecc9ee848 2026-04-11 07:36:30.648419 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: producer_fanout_a634e34ba9ea404faafe3bf5730b5ba5 2026-04-11 07:36:30.687011 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: producer_fanout_ba9194b5aec24ea29dafcc2879d00ec3 2026-04-11 07:36:30.715537 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: producer_fanout_c83cb3ee9e174b9c80c40eb19369966b 2026-04-11 07:36:30.754667 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: producer_fanout_e0c68bf2cc374e548a0b18f5c7a93cec 2026-04-11 07:36:30.796827 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: producer_fanout_ede84d5c924841349b5cfbbc0182dde4 2026-04-11 07:36:30.835683 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: q-plugin 2026-04-11 07:36:30.876152 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: q-plugin.testbed-node-0 2026-04-11 07:36:30.913545 | orchestrator | 2026-04-11 07:36:30 | INFO  | Deleted queue: q-plugin.testbed-node-1 2026-04-11 07:36:30.963411 | orchestrator | 2026-04-11 07:36:30 | INFO 
 | Deleted queue: q-plugin.testbed-node-2 2026-04-11 07:36:31.008118 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: q-reports-plugin 2026-04-11 07:36:31.054676 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: q-reports-plugin.testbed-node-0 2026-04-11 07:36:31.092816 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: q-reports-plugin.testbed-node-1 2026-04-11 07:36:31.153923 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: q-reports-plugin.testbed-node-2 2026-04-11 07:36:31.190163 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: q-server-resource-versions 2026-04-11 07:36:31.234890 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-0 2026-04-11 07:36:31.280460 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-1 2026-04-11 07:36:31.329570 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-2 2026-04-11 07:36:31.362596 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: reply_0daaa29cc4d84169860c6ab45c4675e4 2026-04-11 07:36:31.404464 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: reply_109447c1ca3a41f39d6e2ec21b80b484 2026-04-11 07:36:31.442189 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: reply_11ea9882ffee40f590963ddf26a9c44f 2026-04-11 07:36:31.480192 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: reply_2668994006af415582d1d1328c311c9c 2026-04-11 07:36:31.514273 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: reply_2893e8cb0b96480fbb895a2af46ce3e0 2026-04-11 07:36:31.554474 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: reply_390d12fe8ad24957a094a3eba3f3c352 2026-04-11 07:36:31.592820 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: reply_91cdc63b1929484fb335edd1f8b44de2 2026-04-11 07:36:31.630667 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted 
queue: reply_96169ce8fa5e4886bd823d1b54763cd7 2026-04-11 07:36:31.667678 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: reply_af46f20d0fc8428aa5478fbcdf262250 2026-04-11 07:36:31.708344 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: reply_b006677a74be419f839cc1da974143ed 2026-04-11 07:36:31.742885 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: reply_ed74af6d33854ee69438efaa3cad3b17 2026-04-11 07:36:31.786973 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: scheduler 2026-04-11 07:36:31.833318 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: scheduler.testbed-node-0 2026-04-11 07:36:31.875477 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: scheduler.testbed-node-1 2026-04-11 07:36:31.920081 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: scheduler.testbed-node-2 2026-04-11 07:36:31.967743 | orchestrator | 2026-04-11 07:36:31 | INFO  | Deleted queue: worker 2026-04-11 07:36:32.013516 | orchestrator | 2026-04-11 07:36:32 | INFO  | Deleted queue: worker.testbed-node-0 2026-04-11 07:36:32.059867 | orchestrator | 2026-04-11 07:36:32 | INFO  | Deleted queue: worker.testbed-node-1 2026-04-11 07:36:32.110976 | orchestrator | 2026-04-11 07:36:32 | INFO  | Deleted queue: worker.testbed-node-2 2026-04-11 07:36:32.148615 | orchestrator | 2026-04-11 07:36:32 | INFO  | Deleted queue: worker_fanout_2943ced9ed8d4bde94bb5573bdc84727 2026-04-11 07:36:32.190484 | orchestrator | 2026-04-11 07:36:32 | INFO  | Deleted queue: worker_fanout_4b11bf236f3f4748bfa634ff1a7e52f3 2026-04-11 07:36:32.226728 | orchestrator | 2026-04-11 07:36:32 | INFO  | Deleted queue: worker_fanout_658ad6cb2afe44c9a719dd869579764b 2026-04-11 07:36:32.271269 | orchestrator | 2026-04-11 07:36:32 | INFO  | Deleted queue: worker_fanout_aa757db9dc4c4c029dfdb66c83157496 2026-04-11 07:36:32.316639 | orchestrator | 2026-04-11 07:36:32 | INFO  | Deleted queue: worker_fanout_cfd6b64dc79f43059f1605556fde5765 2026-04-11 
07:36:32.363343 | orchestrator | 2026-04-11 07:36:32 | INFO  | Deleted queue: worker_fanout_d884b62acd7949969177ebe0c5d70ec4 2026-04-11 07:36:32.363446 | orchestrator | 2026-04-11 07:36:32 | INFO  | Successfully deleted 128 queue(s) in vhost '/' 2026-04-11 07:36:32.621505 | orchestrator | + osism migrate rabbitmq3to4 list 2026-04-11 07:36:38.919300 | orchestrator | 2026-04-11 07:36:38 | ERROR  | Unable to get ansible vault password 2026-04-11 07:36:38.919432 | orchestrator | 2026-04-11 07:36:38 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-11 07:36:38.919456 | orchestrator | 2026-04-11 07:36:38 | ERROR  | Dropping encrypted entries 2026-04-11 07:36:38.953265 | orchestrator | 2026-04-11 07:36:38 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-04-11 07:36:39.155602 | orchestrator | 2026-04-11 07:36:39 | INFO  | Found 13 classic queue(s) in vhost '/': 2026-04-11 07:36:39.155728 | orchestrator | 2026-04-11 07:36:39 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-04-11 07:36:39.155756 | orchestrator | 2026-04-11 07:36:39 | INFO  |  - magnum-conductor.o324b5nldxgs (vhost: /, messages: 0) 2026-04-11 07:36:39.155777 | orchestrator | 2026-04-11 07:36:39 | INFO  |  - magnum-conductor.r6ouq6chauay (vhost: /, messages: 0) 2026-04-11 07:36:39.155797 | orchestrator | 2026-04-11 07:36:39 | INFO  |  - magnum-conductor.zveegrxdgu45 (vhost: /, messages: 0) 2026-04-11 07:36:39.155818 | orchestrator | 2026-04-11 07:36:39 | INFO  |  - magnum-conductor_fanout_36aeb47ca98d40ca86774a1a253f6fbc (vhost: /, messages: 0) 2026-04-11 07:36:39.155861 | orchestrator | 2026-04-11 07:36:39 | INFO  |  - magnum-conductor_fanout_529f02b99eaa4bf48a5a44e4fa79144a (vhost: /, messages: 0) 2026-04-11 07:36:39.155896 | orchestrator | 2026-04-11 07:36:39 | INFO  |  - magnum-conductor_fanout_7056a329ee6e491e9084674a0b2a88da (vhost: /, messages: 0) 2026-04-11 
07:36:39.155914 | orchestrator | 2026-04-11 07:36:39 | INFO  |  - magnum-conductor_fanout_85f2fca774fc41d29c93a791ce2daf49 (vhost: /, messages: 0) 2026-04-11 07:36:39.155933 | orchestrator | 2026-04-11 07:36:39 | INFO  |  - magnum-conductor_fanout_9022d045491f438098cb251c983400ef (vhost: /, messages: 0) 2026-04-11 07:36:39.155951 | orchestrator | 2026-04-11 07:36:39 | INFO  |  - magnum-conductor_fanout_a4fc77b03dbe402abe0c4054ee018432 (vhost: /, messages: 0) 2026-04-11 07:36:39.155969 | orchestrator | 2026-04-11 07:36:39 | INFO  |  - magnum-conductor_fanout_d70d8595e8524a0ca43ca8a3441152fe (vhost: /, messages: 0) 2026-04-11 07:36:39.155988 | orchestrator | 2026-04-11 07:36:39 | INFO  |  - magnum-conductor_fanout_ea2c139c802d404b80bc37760845f7c4 (vhost: /, messages: 0) 2026-04-11 07:36:39.156170 | orchestrator | 2026-04-11 07:36:39 | INFO  |  - magnum-conductor_fanout_f74f1fa9920e458aa7ed0976c72b19f6 (vhost: /, messages: 0) 2026-04-11 07:36:39.428378 | orchestrator | + osism migrate rabbitmq3to4 list --vhost openstack --quorum 2026-04-11 07:36:45.861751 | orchestrator | 2026-04-11 07:36:45 | ERROR  | Unable to get ansible vault password 2026-04-11 07:36:45.861861 | orchestrator | 2026-04-11 07:36:45 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-11 07:36:45.861878 | orchestrator | 2026-04-11 07:36:45 | ERROR  | Dropping encrypted entries 2026-04-11 07:36:45.895302 | orchestrator | 2026-04-11 07:36:45 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-04-11 07:36:46.136484 | orchestrator | 2026-04-11 07:36:46 | INFO  | Found 192 quorum queue(s) in vhost 'openstack': 2026-04-11 07:36:46.136607 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - alarm.all.sample (vhost: openstack, messages: 0) 2026-04-11 07:36:46.136629 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - alarming.sample (vhost: openstack, messages: 0) 2026-04-11 07:36:46.136646 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - barbican.workers (vhost: openstack, messages: 0) 2026-04-11 07:36:46.136659 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - barbican.workers.barbican.queue (vhost: openstack, messages: 0) 2026-04-11 07:36:46.136748 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - barbican.workers_fanout_testbed-node-0:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.136800 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - barbican.workers_fanout_testbed-node-1:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.136809 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - barbican.workers_fanout_testbed-node-2:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.136817 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - barbican_notifications.info (vhost: openstack, messages: 0) 2026-04-11 07:36:46.136863 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - central (vhost: openstack, messages: 0) 2026-04-11 07:36:46.136872 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - central.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.137246 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - central.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.137264 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - central.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.137272 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - central_fanout_testbed-node-0:designate-central:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.137280 | orchestrator | 
2026-04-11 07:36:46 | INFO  |  - central_fanout_testbed-node-0:designate-central:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.137288 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - central_fanout_testbed-node-1:designate-central:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.137558 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - central_fanout_testbed-node-1:designate-central:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.137572 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - central_fanout_testbed-node-2:designate-central:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.137580 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - central_fanout_testbed-node-2:designate-central:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.137588 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-backup (vhost: openstack, messages: 0) 2026-04-11 07:36:46.137931 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-backup.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.137969 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-backup.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.137982 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-backup.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.137994 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-backup_fanout_testbed-node-0:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.138006 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-backup_fanout_testbed-node-1:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.138194 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-backup_fanout_testbed-node-2:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.138601 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-scheduler (vhost: openstack, messages: 0) 2026-04-11 07:36:46.138623 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - 
cinder-scheduler.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.138632 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139049 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139064 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-scheduler_fanout_testbed-node-0:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139072 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-scheduler_fanout_testbed-node-1:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139080 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-scheduler_fanout_testbed-node-2:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139087 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-volume (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139094 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139102 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139109 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_testbed-node-0:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139118 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139378 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139392 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_testbed-node-1:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-11 
07:36:46.139407 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139415 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139651 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout_testbed-node-2:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139664 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-volume_fanout_testbed-node-0:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139671 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-volume_fanout_testbed-node-1:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139690 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - cinder-volume_fanout_testbed-node-2:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139697 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - compute (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139908 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - compute.testbed-node-3 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139921 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - compute.testbed-node-4 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.139929 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - compute.testbed-node-5 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.140229 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - compute_fanout_testbed-node-3:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.140242 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - compute_fanout_testbed-node-4:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.140249 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - compute_fanout_testbed-node-5:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.140256 | orchestrator | 
2026-04-11 07:36:46 | INFO  |  - conductor (vhost: openstack, messages: 0) 2026-04-11 07:36:46.140491 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - conductor.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.140503 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - conductor.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.140511 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - conductor.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.140518 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - conductor_fanout_testbed-node-0:nova-conductor:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141254 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - conductor_fanout_testbed-node-0:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141340 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - conductor_fanout_testbed-node-1:nova-conductor:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141360 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - conductor_fanout_testbed-node-1:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141374 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - conductor_fanout_testbed-node-2:nova-conductor:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141387 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - conductor_fanout_testbed-node-2:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141403 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - event.sample (vhost: openstack, messages: 5) 2026-04-11 07:36:46.141647 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-data (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141673 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-data.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141687 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-data.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141701 | orchestrator | 2026-04-11 
07:36:46 | INFO  |  - manila-data.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141714 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-data_fanout_testbed-node-0:manila-data:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141729 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-data_fanout_testbed-node-1:manila-data:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141742 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-data_fanout_testbed-node-2:manila-data:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141887 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-scheduler (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141923 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141938 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141952 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141966 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-scheduler_fanout_testbed-node-0:manila-scheduler:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.141980 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-scheduler_fanout_testbed-node-1:manila-scheduler:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.142246 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-scheduler_fanout_testbed-node-2:manila-scheduler:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.142690 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-share (vhost: openstack, messages: 0) 2026-04-11 07:36:46.142715 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.142747 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 
(vhost: openstack, messages: 0) 2026-04-11 07:36:46.142761 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.142776 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-share_fanout_testbed-node-0:manila-share:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.142788 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-share_fanout_testbed-node-1:manila-share:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.142801 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - manila-share_fanout_testbed-node-2:manila-share:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.142813 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - notifications.audit (vhost: openstack, messages: 0) 2026-04-11 07:36:46.142906 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - notifications.critical (vhost: openstack, messages: 0) 2026-04-11 07:36:46.142925 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - notifications.debug (vhost: openstack, messages: 0) 2026-04-11 07:36:46.142939 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - notifications.error (vhost: openstack, messages: 0) 2026-04-11 07:36:46.142952 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - notifications.info (vhost: openstack, messages: 0) 2026-04-11 07:36:46.142965 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - notifications.sample (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143148 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - notifications.warn (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143171 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - octavia_provisioning_v2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143185 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143229 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: openstack, messages: 0) 
2026-04-11 07:36:46.143244 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143497 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-0:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143548 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-1:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143561 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-2:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143572 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - osism-listener-cinder (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143651 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - osism-listener-glance (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143668 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - osism-listener-ironic (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143679 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - osism-listener-keystone (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143775 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - osism-listener-neutron (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143792 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - osism-listener-nova (vhost: openstack, messages: 0) 2026-04-11 07:36:46.143803 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - producer (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144017 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - producer.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144038 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - producer.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144375 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - producer.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144395 | 
orchestrator | 2026-04-11 07:36:46 | INFO  |  - producer_fanout_testbed-node-0:designate-producer:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144407 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - producer_fanout_testbed-node-0:designate-producer:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144418 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - producer_fanout_testbed-node-1:designate-producer:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144430 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - producer_fanout_testbed-node-1:designate-producer:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144690 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - producer_fanout_testbed-node-2:designate-producer:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144710 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - producer_fanout_testbed-node-2:designate-producer:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144723 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-plugin (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144734 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-plugin.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144746 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-plugin.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144836 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-plugin.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144854 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.144865 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145074 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145092 | orchestrator | 2026-04-11 07:36:46 | INFO  |  
- q-plugin_fanout_testbed-node-1:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145118 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145413 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145436 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145448 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145458 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145469 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145740 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145758 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145769 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145779 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145790 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:10 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.145809 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:11 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146005 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - 
q-reports-plugin_fanout_testbed-node-0:neutron-server:12 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146059 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146071 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:3 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146082 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146398 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:10 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146422 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:11 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146433 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:12 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146442 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146452 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:3 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146783 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146801 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:10 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146838 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:11 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146851 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - 
q-reports-plugin_fanout_testbed-node-2:neutron-server:12 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146860 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.146937 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:3 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147423 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-server-resource-versions (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147444 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147456 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147467 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147478 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:7 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147489 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:8 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147500 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:9 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147511 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:7 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147521 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:8 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147618 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - 
q-server-resource-versions_fanout_testbed-node-1:neutron-server:9 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147634 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:7 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147654 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:8 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147666 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:9 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.147676 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - reply_testbed-node-0:designate-manage:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148054 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - reply_testbed-node-0:designate-producer:3 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148076 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - reply_testbed-node-0:designate-producer:4 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148087 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - reply_testbed-node-1:designate-producer:3 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148098 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - reply_testbed-node-1:designate-producer:4 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148109 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - reply_testbed-node-2:designate-producer:3 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148139 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - reply_testbed-node-2:designate-producer:4 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148150 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - reply_testbed-node-3:nova-compute:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148161 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - reply_testbed-node-4:nova-compute:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148506 | 
orchestrator | 2026-04-11 07:36:46 | INFO  |  - reply_testbed-node-5:nova-compute:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148527 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - scheduler (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148538 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - scheduler.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148549 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - scheduler.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148559 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - scheduler.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.148570 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - scheduler_fanout_testbed-node-0:nova-scheduler:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149165 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - scheduler_fanout_testbed-node-0:nova-scheduler:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149218 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - scheduler_fanout_testbed-node-1:nova-scheduler:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149226 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - scheduler_fanout_testbed-node-1:nova-scheduler:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149232 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - scheduler_fanout_testbed-node-2:nova-scheduler:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149242 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - scheduler_fanout_testbed-node-2:nova-scheduler:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149313 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - worker (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149326 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - worker.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149335 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - worker.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149511 | 
orchestrator | 2026-04-11 07:36:46 | INFO  |  - worker.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149530 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - worker_fanout_testbed-node-0:designate-worker:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149539 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - worker_fanout_testbed-node-0:designate-worker:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149549 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - worker_fanout_testbed-node-1:designate-worker:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149558 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - worker_fanout_testbed-node-1:designate-worker:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149565 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - worker_fanout_testbed-node-2:designate-worker:1 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.149578 | orchestrator | 2026-04-11 07:36:46 | INFO  |  - worker_fanout_testbed-node-2:designate-worker:2 (vhost: openstack, messages: 0) 2026-04-11 07:36:46.393946 | orchestrator | + osism migrate rabbitmq3to4 delete-exchanges 2026-04-11 07:36:52.854491 | orchestrator | 2026-04-11 07:36:52 | ERROR  | Unable to get ansible vault password 2026-04-11 07:36:52.854600 | orchestrator | 2026-04-11 07:36:52 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-11 07:36:52.854646 | orchestrator | 2026-04-11 07:36:52 | ERROR  | Dropping encrypted entries 2026-04-11 07:36:52.887907 | orchestrator | 2026-04-11 07:36:52 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
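The `osism migrate rabbitmq3to4 list --vhost openstack --quorum` step above enumerates quorum queues through the RabbitMQ Management API (reachable at 192.168.16.10:15672 per the log). A minimal sketch of the filtering such a listing implies, assuming the standard `GET /api/queues/{vhost}` JSON shape; the function name `filter_quorum_queues` and the sample payload are illustrative, not taken from the osism source:

```python
# Sketch: reduce the JSON the RabbitMQ Management API returns for
# GET /api/queues/{vhost} to quorum queues only, mirroring the
# "name (vhost: ..., messages: N)" lines in the log output.
# Names and sample data are illustrative, not from the osism code.

def filter_quorum_queues(queues):
    """Return (name, vhost, messages) tuples for quorum queues only."""
    return [
        (q["name"], q["vhost"], q.get("messages", 0))
        for q in queues
        if q.get("type") == "quorum"
    ]

if __name__ == "__main__":
    sample = [
        {"name": "compute", "vhost": "openstack", "type": "quorum", "messages": 0},
        {"name": "amq.gen-abc", "vhost": "openstack", "type": "classic", "messages": 3},
        {"name": "event.sample", "vhost": "openstack", "type": "quorum", "messages": 5},
    ]
    for name, vhost, messages in filter_quorum_queues(sample):
        print(f" - {name} (vhost: {vhost}, messages: {messages})")
```

The HTTP request itself is omitted here; against a live broker the same filtering can be applied to the parsed response body.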
2026-04-11 07:36:52.907766 | orchestrator | 2026-04-11 07:36:52 | INFO  | Found 27 exchange(s) in vhost '/' 2026-04-11 07:36:52.940397 | orchestrator | 2026-04-11 07:36:52 | INFO  | Deleted exchange: aodh 2026-04-11 07:36:52.968563 | orchestrator | 2026-04-11 07:36:52 | INFO  | Deleted exchange: ceilometer 2026-04-11 07:36:52.997641 | orchestrator | 2026-04-11 07:36:52 | INFO  | Deleted exchange: cinder 2026-04-11 07:36:53.028661 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: designate 2026-04-11 07:36:53.066101 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: dns 2026-04-11 07:36:53.115841 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: glance 2026-04-11 07:36:53.151165 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: heat 2026-04-11 07:36:53.194642 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: ironic 2026-04-11 07:36:53.227815 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: keystone 2026-04-11 07:36:53.257361 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: l3_agent_fanout 2026-04-11 07:36:53.305663 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: magnum 2026-04-11 07:36:53.354361 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: magnum-conductor_fanout 2026-04-11 07:36:53.383960 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: neutron 2026-04-11 07:36:53.421844 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: neutron-vo-Network-1.1_fanout 2026-04-11 07:36:53.471148 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: neutron-vo-Port-1.10_fanout 2026-04-11 07:36:53.505053 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: neutron-vo-SecurityGroup-1.6_fanout 2026-04-11 07:36:53.535020 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: neutron-vo-SecurityGroupRule-1.3_fanout 2026-04-11 07:36:53.561592 | orchestrator | 2026-04-11 07:36:53 | INFO  | 
Deleted exchange: neutron-vo-Subnet-1.2_fanout 2026-04-11 07:36:53.605161 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: nova 2026-04-11 07:36:53.644969 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: octavia 2026-04-11 07:36:53.671727 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: openstack 2026-04-11 07:36:53.703938 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: q-agent-notifier-port-update_fanout 2026-04-11 07:36:53.731815 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: q-agent-notifier-security_group-update_fanout 2026-04-11 07:36:53.759512 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: scheduler_fanout 2026-04-11 07:36:53.813327 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: swift 2026-04-11 07:36:53.847269 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: trove 2026-04-11 07:36:53.884614 | orchestrator | 2026-04-11 07:36:53 | INFO  | Deleted exchange: zaqar 2026-04-11 07:36:53.884729 | orchestrator | 2026-04-11 07:36:53 | INFO  | Successfully deleted 27 exchange(s) in vhost '/' 2026-04-11 07:36:54.147519 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges 2026-04-11 07:37:00.637259 | orchestrator | 2026-04-11 07:37:00 | ERROR  | Unable to get ansible vault password 2026-04-11 07:37:00.637352 | orchestrator | 2026-04-11 07:37:00 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-11 07:37:00.637361 | orchestrator | 2026-04-11 07:37:00 | ERROR  | Dropping encrypted entries 2026-04-11 07:37:00.671428 | orchestrator | 2026-04-11 07:37:00 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
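The `delete-exchanges` step above removes 27 exchanges from vhost `/` via the Management API. A small sketch of the per-exchange `DELETE /api/exchanges/{vhost}/{name}` addressing, where the default vhost `/` must be percent-encoded as `%2F`; skipping the `amq.*` built-ins is an assumption (the log only shows service exchanges being deleted), and the helper names are illustrative:

```python
# Sketch of the per-exchange DELETE call behind `delete-exchanges`.
# The management API addresses an exchange as
# /api/exchanges/{vhost}/{name}; the default vhost '/' percent-encodes
# to %2F. Filtering out amq.* built-ins is an assumption here.
from urllib.parse import quote

def exchange_delete_url(host, port, vhost, exchange):
    """Build the management-API URL for deleting one exchange."""
    return (
        f"http://{host}:{port}/api/exchanges/"
        f"{quote(vhost, safe='')}/{quote(exchange, safe='')}"
    )

def deletable(exchanges):
    """Drop RabbitMQ's built-in exchanges (amq.* and the default '')."""
    return [e for e in exchanges if e and not e.startswith("amq.")]

if __name__ == "__main__":
    for name in deletable(["", "amq.direct", "nova", "neutron"]):
        print(exchange_delete_url("192.168.16.10", 15672, "/", name))
```

Issuing the actual `DELETE` (with the broker credentials) against each URL is the part the migration tool performs.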
2026-04-11 07:37:00.683859 | orchestrator | 2026-04-11 07:37:00 | INFO  | No exchanges found in vhost '/' 2026-04-11 07:37:00.920949 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-11 07:37:00.921046 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/400-monitoring.sh 2026-04-11 07:37:02.218729 | orchestrator | 2026-04-11 07:37:02 | INFO  | Prepare task for execution of prometheus. 2026-04-11 07:37:02.284793 | orchestrator | 2026-04-11 07:37:02 | INFO  | Task 961adc97-cebb-4eaa-8c53-b6b39f61ef9e (prometheus) was prepared for execution. 2026-04-11 07:37:02.284888 | orchestrator | 2026-04-11 07:37:02 | INFO  | It takes a moment until task 961adc97-cebb-4eaa-8c53-b6b39f61ef9e (prometheus) has been started and output is visible here. 2026-04-11 07:37:19.560472 | orchestrator | 2026-04-11 07:37:19.560579 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 07:37:19.560607 | orchestrator | 2026-04-11 07:37:19.560629 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 07:37:19.560651 | orchestrator | Saturday 11 April 2026 07:37:07 +0000 (0:00:01.671) 0:00:01.671 ******** 2026-04-11 07:37:19.560671 | orchestrator | ok: [testbed-manager] 2026-04-11 07:37:19.560687 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:37:19.560698 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:37:19.560709 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:37:19.560720 | orchestrator | ok: [testbed-node-3] 2026-04-11 07:37:19.560731 | orchestrator | ok: [testbed-node-4] 2026-04-11 07:37:19.560741 | orchestrator | ok: [testbed-node-5] 2026-04-11 07:37:19.560752 | orchestrator | 2026-04-11 07:37:19.560763 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 07:37:19.560774 | orchestrator | Saturday 11 April 2026 07:37:09 +0000 (0:00:02.663) 0:00:04.335 ******** 2026-04-11 07:37:19.560786 | orchestrator | ok: 
[testbed-manager] => (item=enable_prometheus_True)
2026-04-11 07:37:19.560797 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-11 07:37:19.560808 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-11 07:37:19.560819 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-11 07:37:19.560829 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-11 07:37:19.560840 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-11 07:37:19.560851 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-11 07:37:19.560862 | orchestrator |
2026-04-11 07:37:19.560873 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-11 07:37:19.560884 | orchestrator |
2026-04-11 07:37:19.560895 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-11 07:37:19.560906 | orchestrator | Saturday 11 April 2026 07:37:12 +0000 (0:00:02.460) 0:00:06.795 ********
2026-04-11 07:37:19.560917 | orchestrator | included: /ansible/roles/prometheus/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 07:37:19.560929 | orchestrator |
2026-04-11 07:37:19.560940 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-11 07:37:19.560951 | orchestrator | Saturday 11 April 2026 07:37:16 +0000 (0:00:04.380) 0:00:11.176 ********
2026-04-11 07:37:19.560967 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-11 07:37:19.561004 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:19.561029 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:19.561062 | orchestrator | ok: [testbed-manager] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:19.561076 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:19.561090 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:19.561102 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:19.561123 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:19.561136 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:19.561149 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:19.561198 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:19.561219 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:20.400561 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:20.400656 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:20.400694 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:20.400707 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:20.400728 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:20.400765 | orchestrator | ok: 
[testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:37:20.400813 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 07:37:20.400833 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:20.400845 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 07:37:20.400866 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:20.400877 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 07:37:20.400889 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:20.400906 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:20.400918 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:20.400949 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 07:37:27.440912 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 07:37:27.441078 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 07:37:27.441106 | orchestrator |
2026-04-11 07:37:27.441128 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-11 07:37:27.441199 | orchestrator | Saturday 11 April 2026 07:37:21 +0000 (0:00:04.837) 0:00:16.013 ********
2026-04-11 07:37:27.441220 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-11 07:37:27.441282 | orchestrator |
2026-04-11 07:37:27.441304 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-04-11 07:37:27.441322 | orchestrator | Saturday 11 April 2026 07:37:24 +0000 (0:00:02.923) 0:00:18.937 ********
2026-04-11 07:37:27.441344 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-11 07:37:27.441366 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 07:37:27.441385 | orchestrator
| ok: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:27.441431 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:27.441470 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:27.441489 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:27.441507 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:27.441574 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:27.441599 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:27.441625 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:27.441647 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:27.441694 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:29.357773 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:29.357864 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:29.357874 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:29.357883 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:29.357905 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:29.357913 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:29.357939 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 07:37:29.357961 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 07:37:29.357968 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 07:37:29.357977 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:37:29.357985 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:29.357996 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:29.358003 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:29.358051 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:29.358066 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:32.278234 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:32.278360 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:32.278378 | orchestrator | 2026-04-11 07:37:32.278391 | orchestrator | TASK 
[service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-11 07:37:32.278404 | orchestrator | Saturday 11 April 2026 07:37:30 +0000 (0:00:06.243) 0:00:25.180 ******** 2026-04-11 07:37:32.278436 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-11 07:37:32.278474 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:32.278487 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:32.278499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:32.278530 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:37:32.278543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:32.278555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:32.278566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:32.278583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': 
{}}})  2026-04-11 07:37:32.278603 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:37:32.278623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:33.151010 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:37:33.151132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:33.151196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:37:33.151211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:33.151240 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:33.151275 | orchestrator | skipping: [testbed-manager] 2026-04-11 07:37:33.151287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:37:33.151299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 07:37:33.151310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:33.151321 | orchestrator | skipping: [testbed-node-3] 
2026-04-11 07:37:33.151332 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:37:33.151363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:33.151375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:33.151387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:33.151411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:37:33.151424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:33.151435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:37:33.151447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 07:37:33.151458 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:37:33.151477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:35.785692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 07:37:35.785792 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:37:35.785811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 
07:37:35.785844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:35.785855 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:37:35.785866 | orchestrator | 2026-04-11 07:37:35.785878 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-11 07:37:35.785904 | orchestrator | Saturday 11 April 2026 07:37:34 +0000 (0:00:03.575) 0:00:28.756 ******** 2026-04-11 07:37:35.785915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:35.785930 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-11 07:37:35.785944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:35.785973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:35.785985 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:35.786003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:35.786074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:37:35.786087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:35.786098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:35.786110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:35.786131 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:37:36.663956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:36.664075 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:37:36.664100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:37:36.664110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:36.664118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:36.664126 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:37:36.664171 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:36.664181 | orchestrator | skipping: [testbed-manager] 2026-04-11 
07:37:36.664206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:36.664227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:37:36.664235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:36.664246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 07:37:36.664255 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:37:36.664262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:36.664270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:37:36.664278 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:37:36.664285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:37:36.664300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:37:41.423116 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:37:41.423255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:37:41.423270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:37:41.423295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 07:37:41.423304 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:37:41.423313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 07:37:41.423322 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:37:41.423330 | orchestrator | 2026-04-11 07:37:41.423340 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-11 07:37:41.423349 | orchestrator | Saturday 11 April 2026 07:37:38 +0000 (0:00:04.039) 0:00:32.795 ******** 2026-04-11 07:37:41.423360 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-11 07:37:41.423402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:41.423413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:41.423422 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:41.423435 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:41.423444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:41.423452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:41.423460 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:37:41.423469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:41.423490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:43.652599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:43.652710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:43.652743 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:43.652757 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:43.652771 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:43.652792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:43.652840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:43.652885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:37:43.652909 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 07:37:43.652922 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 07:37:43.652941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 07:37:43.652956 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:37:43.652979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:43.652991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:37:43.653011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:38:19.417810 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:38:19.417946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:38:19.418004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:38:19.418074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:38:19.418160 | orchestrator | 2026-04-11 07:38:19.418175 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-11 07:38:19.418188 | orchestrator | Saturday 11 April 2026 07:37:45 +0000 (0:00:07.440) 0:00:40.236 ******** 2026-04-11 07:38:19.418200 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 07:38:19.418213 | orchestrator | 2026-04-11 
07:38:19.418224 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-11 07:38:19.418235 | orchestrator | Saturday 11 April 2026 07:37:48 +0000 (0:00:02.350) 0:00:42.587 ******** 2026-04-11 07:38:19.418247 | orchestrator | skipping: [testbed-manager] 2026-04-11 07:38:19.418258 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:38:19.418270 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:38:19.418281 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:38:19.418291 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:38:19.418302 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:38:19.418313 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:38:19.418324 | orchestrator | 2026-04-11 07:38:19.418336 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-11 07:38:19.418349 | orchestrator | Saturday 11 April 2026 07:37:50 +0000 (0:00:01.968) 0:00:44.555 ******** 2026-04-11 07:38:19.418361 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 07:38:19.418374 | orchestrator | 2026-04-11 07:38:19.418386 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-11 07:38:19.418399 | orchestrator | Saturday 11 April 2026 07:37:52 +0000 (0:00:01.868) 0:00:46.424 ******** 2026-04-11 07:38:19.418412 | orchestrator | [WARNING]: Skipped 2026-04-11 07:38:19.418426 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 07:38:19.418440 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-11 07:38:19.418453 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 07:38:19.418465 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-11 07:38:19.418478 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-11 07:38:19.418491 | orchestrator | 
[WARNING]: Skipped 2026-04-11 07:38:19.418504 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 07:38:19.418516 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-11 07:38:19.418529 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 07:38:19.418541 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-11 07:38:19.418554 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 07:38:19.418567 | orchestrator | [WARNING]: Skipped 2026-04-11 07:38:19.418580 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 07:38:19.418593 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-11 07:38:19.418625 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 07:38:19.418638 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-11 07:38:19.418650 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-11 07:38:19.418663 | orchestrator | [WARNING]: Skipped 2026-04-11 07:38:19.418675 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 07:38:19.418688 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-11 07:38:19.418701 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 07:38:19.418712 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-11 07:38:19.418723 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-11 07:38:19.418734 | orchestrator | [WARNING]: Skipped 2026-04-11 07:38:19.418744 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-11 07:38:19.418755 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-11 07:38:19.418774 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 07:38:19.418792 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-04-11 07:38:19.418803 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-11 07:38:19.418814 | orchestrator | [WARNING]: Skipped
2026-04-11 07:38:19.418825 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 07:38:19.418835 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-04-11 07:38:19.418846 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 07:38:19.418857 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-04-11 07:38:19.418868 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-11 07:38:19.418879 | orchestrator | [WARNING]: Skipped
2026-04-11 07:38:19.418889 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 07:38:19.418900 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-04-11 07:38:19.418911 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-11 07:38:19.418922 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-04-11 07:38:19.418932 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-11 07:38:19.418943 | orchestrator |
2026-04-11 07:38:19.418954 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-04-11 07:38:19.418971 | orchestrator | Saturday 11 April 2026 07:37:55 +0000 (0:00:03.582) 0:00:50.007 ********
2026-04-11 07:38:19.418989 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 07:38:19.419011 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 07:38:19.419030 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:38:19.419050 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:38:19.419069 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 07:38:19.419113 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:38:19.419131 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 07:38:19.419150 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:38:19.419168 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 07:38:19.419188 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:38:19.419204 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 07:38:19.419215 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:38:19.419226 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-11 07:38:19.419237 | orchestrator |
2026-04-11 07:38:19.419248 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-04-11 07:38:19.419259 | orchestrator | Saturday 11 April 2026 07:38:13 +0000 (0:00:18.258) 0:01:08.265 ********
2026-04-11 07:38:19.419270 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 07:38:19.419280 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:38:19.419291 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 07:38:19.419302 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 07:38:19.419313 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:38:19.419324 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:38:19.419335 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 07:38:19.419346 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:38:19.419357 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 07:38:19.419377 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:38:19.419389 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 07:38:19.419399 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:38:19.419411 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-11 07:38:19.419421 | orchestrator |
2026-04-11 07:38:19.419432 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-04-11 07:38:19.419443 | orchestrator | Saturday 11 April 2026 07:38:18 +0000 (0:00:04.928) 0:01:13.194 ********
2026-04-11 07:38:19.419464 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 07:38:59.423271 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 07:38:59.423414 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:38:59.423435 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:38:59.423448 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 07:38:59.423460 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 07:38:59.423472 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:38:59.423483 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:38:59.423494 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 07:38:59.423523 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 07:38:59.423534 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:38:59.423545 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-11 07:38:59.423557 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:38:59.423568 | orchestrator |
2026-04-11 07:38:59.423580 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-04-11 07:38:59.423592 | orchestrator | Saturday 11 April 2026 07:38:21 +0000 (0:00:02.893) 0:01:16.088 ********
2026-04-11 07:38:59.423603 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 07:38:59.423615 | orchestrator |
2026-04-11 07:38:59.423626 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-04-11 07:38:59.423638 | orchestrator | Saturday 11 April 2026 07:38:23 +0000 (0:00:01.757) 0:01:17.845 ********
2026-04-11 07:38:59.423649 | orchestrator | skipping: [testbed-manager]
2026-04-11 07:38:59.423660 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:38:59.423671 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:38:59.423682 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:38:59.423693 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:38:59.423704 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:38:59.423715 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:38:59.423726 | orchestrator |
2026-04-11 07:38:59.423737 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-04-11 07:38:59.423748 | orchestrator | Saturday 11 April 2026 07:38:25 +0000 (0:00:01.959) 0:01:19.805 ********
2026-04-11 07:38:59.423761 | orchestrator | skipping: [testbed-manager]
2026-04-11 07:38:59.423774 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:38:59.423786 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:38:59.423798 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:38:59.423810 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:38:59.423823 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:38:59.423836 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:38:59.423849 | orchestrator |
2026-04-11 07:38:59.423884 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-04-11 07:38:59.423897 | orchestrator | Saturday 11 April 2026 07:38:28 +0000 (0:00:03.225) 0:01:23.030 ********
2026-04-11 07:38:59.423910 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 07:38:59.423923 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 07:38:59.423936 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 07:38:59.423948 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 07:38:59.423961 | orchestrator | skipping: [testbed-manager]
2026-04-11 07:38:59.423974 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:38:59.423986 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:38:59.423999 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:38:59.424011 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 07:38:59.424024 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:38:59.424057 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 07:38:59.424068 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:38:59.424079 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-11 07:38:59.424090 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:38:59.424101 | orchestrator |
2026-04-11 07:38:59.424112 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-04-11 07:38:59.424123 | orchestrator | Saturday 11 April 2026 07:38:31 +0000 (0:00:02.955) 0:01:25.986 ********
2026-04-11 07:38:59.424134 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-11 07:38:59.424145 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:38:59.424156 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-11 07:38:59.424166 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:38:59.424177 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-11 07:38:59.424188 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:38:59.424199 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-11 07:38:59.424210 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:38:59.424240 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-11 07:38:59.424251 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:38:59.424262 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-11 07:38:59.424273 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:38:59.424284 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-11 07:38:59.424294 | orchestrator |
2026-04-11 07:38:59.424305 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-04-11 07:38:59.424316 | orchestrator | Saturday 11 April 2026 07:38:34 +0000 (0:00:02.828) 0:01:28.814 ********
2026-04-11 07:38:59.424327 | orchestrator | [WARNING]: Skipped
2026-04-11 07:38:59.424338 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-04-11 07:38:59.424348 | orchestrator | due to this access issue:
2026-04-11 07:38:59.424365 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-04-11 07:38:59.424377 | orchestrator | not a directory
2026-04-11 07:38:59.424387 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-11 07:38:59.424398 | orchestrator |
2026-04-11 07:38:59.424409 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-04-11 07:38:59.424428 | orchestrator | Saturday 11 April 2026 07:38:36 +0000 (0:00:02.463) 0:01:31.278 ********
2026-04-11 07:38:59.424438 | orchestrator | skipping: [testbed-manager]
2026-04-11 07:38:59.424449 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:38:59.424460 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:38:59.424470 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:38:59.424481 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:38:59.424492 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:38:59.424503 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:38:59.424513 | orchestrator |
2026-04-11 07:38:59.424524 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-04-11 07:38:59.424535 | orchestrator | Saturday 11 April 2026 07:38:38 +0000 (0:00:01.904) 0:01:33.183 ********
2026-04-11 07:38:59.424546 | orchestrator | skipping: [testbed-manager]
2026-04-11 07:38:59.424556 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:38:59.424567 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:38:59.424577 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:38:59.424588 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:38:59.424598 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:38:59.424609 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:38:59.424619 | orchestrator |
2026-04-11 07:38:59.424630 | orchestrator | TASK [prometheus : Check for the existence of Prometheus v2 container volume] ***
2026-04-11 07:38:59.424640 | orchestrator | Saturday 11 April 2026 07:38:41 +0000 (0:00:02.409) 0:01:35.593 ********
2026-04-11 07:38:59.424651 | orchestrator | ok: [testbed-manager]
2026-04-11 07:38:59.424662 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:38:59.424673 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:38:59.424683 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:38:59.424694 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:38:59.424704 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:38:59.424714 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:38:59.424725 | orchestrator |
2026-04-11 07:38:59.424736 | orchestrator | TASK [prometheus : Gracefully stop Prometheus] *********************************
2026-04-11 07:38:59.424746 | orchestrator | Saturday 11 April 2026 07:38:43 +0000 (0:00:02.348) 0:01:37.942 ********
2026-04-11 07:38:59.424757 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:38:59.424768 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:38:59.424779 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:38:59.424789 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:38:59.424800 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:38:59.424811 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:38:59.424821 | orchestrator | changed: [testbed-manager]
2026-04-11 07:38:59.424832 | orchestrator |
2026-04-11 07:38:59.424842 | orchestrator | TASK [prometheus : Create new Prometheus v3 volume] ****************************
2026-04-11 07:38:59.424853 | orchestrator | Saturday 11 April 2026 07:38:51 +0000 (0:00:08.049) 0:01:45.991 ********
2026-04-11 07:38:59.424864 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:38:59.424874 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:38:59.424885 | orchestrator | changed: [testbed-manager]
2026-04-11 07:38:59.424895 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:38:59.424906 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:38:59.424916 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:38:59.424927 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:38:59.424937 | orchestrator |
2026-04-11 07:38:59.424948 | orchestrator | TASK [prometheus : Move _data from old to new volume] **************************
2026-04-11 07:38:59.424959 | orchestrator | Saturday 11 April 2026 07:38:53 +0000 (0:00:02.152) 0:01:48.144 ********
2026-04-11 07:38:59.424970 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:38:59.424980 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:38:59.424991 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:38:59.425001 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:38:59.425012 | orchestrator | changed: [testbed-manager]
2026-04-11 07:38:59.425022 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:38:59.425059 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:38:59.425070 | orchestrator |
2026-04-11 07:38:59.425081 | orchestrator | TASK [prometheus : Remove old Prometheus v2 volume] ****************************
2026-04-11 07:38:59.425092 | orchestrator | Saturday 11 April 2026 07:38:55 +0000 (0:00:02.097) 0:01:50.242 ********
2026-04-11 07:38:59.425102 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:38:59.425113 | orchestrator |
skipping: [testbed-node-1] 2026-04-11 07:38:59.425123 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:38:59.425134 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:38:59.425144 | orchestrator | changed: [testbed-manager] 2026-04-11 07:38:59.425155 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:38:59.425165 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:38:59.425176 | orchestrator | 2026-04-11 07:38:59.425187 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-04-11 07:38:59.425197 | orchestrator | Saturday 11 April 2026 07:38:58 +0000 (0:00:02.490) 0:01:52.732 ******** 2026-04-11 07:38:59.425229 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-11 07:39:01.205110 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:39:01.205209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:39:01.205223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:39:01.205233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:39:01.205263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:39:01.205273 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:39:01.205282 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-11 07:39:01.205321 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:39:01.205333 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:39:01.205343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:39:01.205353 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:39:01.205368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:39:01.205377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:39:01.205391 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:39:01.205408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:39:07.362472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 07:39:07.362565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 07:39:07.362594 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:39:07.362602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:39:07.362608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-11 07:39:07.362614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:39:07.362633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:39:07.362653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:39:07.362661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:39:07.362672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-11 07:39:07.362679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-11 07:39:07.362686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-11 07:39:07.362692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 07:39:07.362698 | orchestrator |
2026-04-11 07:39:07.362705 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] ***
2026-04-11 07:39:07.362713 | orchestrator | Saturday 11 April 2026 07:39:04 +0000 (0:00:06.369) 0:01:59.102 ********
2026-04-11 07:39:07.362719 | orchestrator | changed: [testbed-manager] => {
2026-04-11 07:39:07.362727 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 07:39:07.362733 | orchestrator | }
2026-04-11 07:39:07.362739 | orchestrator | changed: [testbed-node-0] => {
2026-04-11 07:39:07.362744 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 07:39:07.362750 | orchestrator | }
2026-04-11 07:39:07.362755 | orchestrator | changed: [testbed-node-1] => {
2026-04-11 07:39:07.362764 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 07:39:07.362770 | orchestrator | }
2026-04-11 07:39:07.362776 | orchestrator | changed: [testbed-node-2] => {
2026-04-11 07:39:07.362782 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 07:39:07.362788 | orchestrator | }
2026-04-11 07:39:07.362793 | orchestrator | changed: [testbed-node-3] => {
2026-04-11 07:39:07.362799 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 07:39:07.362804 | orchestrator | }
2026-04-11 07:39:07.362810 | orchestrator | changed: [testbed-node-4] => {
2026-04-11 07:39:07.362816 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 07:39:07.362821 | orchestrator | }
2026-04-11 07:39:07.362827 | orchestrator | changed: [testbed-node-5] => {
2026-04-11 07:39:07.362832 | orchestrator |     "msg": "Notifying handlers"
2026-04-11 07:39:07.362838 | orchestrator | }
2026-04-11 07:39:07.362843 | orchestrator |
2026-04-11 07:39:07.362849 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-11 07:39:07.362855 | orchestrator | Saturday 11 April 2026 07:39:06 +0000 (0:00:02.070) 0:02:01.172 ********
2026-04-11 07:39:07.362869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-11 07:39:07.509910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-11 07:39:07.510000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes':
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:39:07.510118 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-11 07:39:07.510137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2026-04-11 07:39:07.510161 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:39:07.510169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:39:07.510214 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:39:07.510223 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:39:07.510255 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:39:07.510264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:39:07.510275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:39:07.510283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:39:07.510303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:39:08.261545 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:39:08.261656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:39:08.261675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:39:08.261688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:39:08.261700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:39:08.261711 | orchestrator | skipping: [testbed-manager] 2026-04-11 07:39:08.261724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:39:08.261735 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:39:08.261764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-11 07:39:08.261798 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:39:08.261810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:39:08.261841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:39:08.261854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 07:39:08.261865 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:39:08.261877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:39:08.261895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-11 07:39:08.261915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-11 07:39:08.261931 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:39:08.261957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-11 07:39:08.261968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-11 07:39:08.261988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-11 07:41:32.431932 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:41:32.432821 | orchestrator |
2026-04-11 07:41:32.432863 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 07:41:32.432914 | orchestrator | Saturday 11 April 2026 07:39:09 +0000 (0:00:02.996) 0:02:04.169 ********
2026-04-11 07:41:32.432925 | orchestrator |
2026-04-11 07:41:32.432935 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 07:41:32.432945 | orchestrator | Saturday 11 April 2026 07:39:10 +0000 (0:00:00.460) 0:02:04.630 ********
2026-04-11 07:41:32.432955 | orchestrator |
2026-04-11 07:41:32.432965 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 07:41:32.432975 | orchestrator | Saturday 11 April 2026 07:39:10 +0000 (0:00:00.456) 0:02:05.086 ********
2026-04-11 07:41:32.432985 | orchestrator |
2026-04-11 07:41:32.432995 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 07:41:32.433004 | orchestrator | Saturday 11 April 2026 07:39:11 +0000 (0:00:00.460) 0:02:05.547 ********
2026-04-11 07:41:32.433014 | orchestrator |
2026-04-11 07:41:32.433024 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 07:41:32.433033 | orchestrator | Saturday 11 April 2026 07:39:11 +0000 (0:00:00.433) 0:02:05.981 ********
2026-04-11 07:41:32.433043 | orchestrator |
2026-04-11 07:41:32.433052 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 07:41:32.433062 | orchestrator | Saturday 11 April 2026 07:39:12 +0000 (0:00:00.439) 0:02:06.421 ********
2026-04-11 07:41:32.433072 | orchestrator |
2026-04-11 07:41:32.433081 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-11 07:41:32.433091 | orchestrator | Saturday 11 April 2026 07:39:12 +0000 (0:00:00.693) 0:02:07.115 ********
2026-04-11 07:41:32.433101 | orchestrator |
2026-04-11 07:41:32.433110 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-11 07:41:32.433120 | orchestrator | Saturday 11 April 2026 07:39:13 +0000 (0:00:00.883) 0:02:07.998 ********
2026-04-11 07:41:32.433130 | orchestrator | changed: [testbed-manager]
2026-04-11 07:41:32.433140 | orchestrator |
2026-04-11 07:41:32.433149 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-04-11 07:41:32.433159 | orchestrator | Saturday 11 April 2026 07:39:37 +0000 (0:00:23.990) 0:02:31.989 ********
2026-04-11 07:41:32.433193 | orchestrator | changed: [testbed-node-3]
2026-04-11 07:41:32.433204 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:41:32.433213 | orchestrator | changed: [testbed-manager]
2026-04-11 07:41:32.433223 | orchestrator | changed: [testbed-node-4]
2026-04-11 07:41:32.433232 | orchestrator | changed: [testbed-node-5]
2026-04-11 07:41:32.433242 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:41:32.433252 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:41:32.433261 | orchestrator |
2026-04-11 07:41:32.433271 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-04-11 07:41:32.433281 | orchestrator | Saturday 11 April 2026 07:39:55 +0000 (0:00:18.185) 0:02:50.174 ********
2026-04-11 07:41:32.433290 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:41:32.433300 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:41:32.433309 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:41:32.433319 | orchestrator |
2026-04-11 07:41:32.433329 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-04-11 07:41:32.433339 | orchestrator | Saturday 11 April 2026 07:40:08 +0000 (0:00:12.891) 0:03:03.066 ********
2026-04-11 07:41:32.433349 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:41:32.433358 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:41:32.433368 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:41:32.433377 | orchestrator |
2026-04-11 07:41:32.433387 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-04-11 07:41:32.433397 | orchestrator | Saturday 11 April 2026 07:40:21 +0000 (0:00:12.883) 0:03:15.950 ********
2026-04-11 07:41:32.433406 | orchestrator | changed: [testbed-manager]
2026-04-11 07:41:32.433416 | orchestrator | changed: [testbed-node-3]
2026-04-11 07:41:32.433425 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:41:32.433435 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:41:32.433444 | orchestrator | changed: [testbed-node-4]
2026-04-11 07:41:32.433454 | orchestrator | changed: [testbed-node-5]
2026-04-11 07:41:32.433463 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:41:32.433473 | orchestrator |
2026-04-11 07:41:32.433496 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-04-11 07:41:32.433507 | orchestrator | Saturday 11 April 2026 07:40:38 +0000 (0:00:16.606) 0:03:32.557 ********
2026-04-11 07:41:32.433516 | orchestrator | changed: [testbed-manager]
2026-04-11 07:41:32.433526 | orchestrator |
2026-04-11 07:41:32.433535 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-04-11 07:41:32.433545 | orchestrator | Saturday 11 April 2026 07:40:53 +0000 (0:00:15.199) 0:03:47.756 ********
2026-04-11 07:41:32.433555 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:41:32.433564 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:41:32.433574 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:41:32.433583 | orchestrator |
2026-04-11 07:41:32.433593 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-04-11 07:41:32.433603 | orchestrator | Saturday 11 April 2026 07:41:06 +0000 (0:00:12.871) 0:04:00.628 ********
2026-04-11 07:41:32.433612 | orchestrator | changed: [testbed-manager]
2026-04-11 07:41:32.433622 | orchestrator |
2026-04-11 07:41:32.433631 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-04-11 07:41:32.433641 | orchestrator | Saturday 11 April 2026 07:41:18 +0000 (0:00:12.492) 0:04:13.121 ********
2026-04-11 07:41:32.433821 | orchestrator | changed: [testbed-node-3]
2026-04-11 07:41:32.433837 | orchestrator | changed: [testbed-node-4]
2026-04-11 07:41:32.433847 | orchestrator | changed: [testbed-node-5]
2026-04-11 07:41:32.433857 | orchestrator |
2026-04-11 07:41:32.433866 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 07:41:32.433904 | orchestrator | testbed-manager : ok=28  changed=14  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-11 07:41:32.433939 | orchestrator | testbed-node-0 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-11 07:41:32.434067 | orchestrator | testbed-node-1 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-11 07:41:32.434083 | orchestrator | testbed-node-2 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-11 07:41:32.434093 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-11 07:41:32.434103 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-11 07:41:32.434113 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-11 07:41:32.434122 | orchestrator |
2026-04-11 07:41:32.434132 | orchestrator |
2026-04-11 07:41:32.434142 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 07:41:32.434151 | orchestrator | Saturday 11 April 2026 07:41:32 +0000 (0:00:13.261) 0:04:26.382 ********
2026-04-11 07:41:32.434161 | orchestrator | ===============================================================================
2026-04-11 07:41:32.434171 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 23.99s
2026-04-11 07:41:32.434180 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.26s
2026-04-11 07:41:32.434190 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 18.19s
2026-04-11 07:41:32.434200 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.61s
2026-04-11 07:41:32.434209 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 15.20s
2026-04-11 07:41:32.434218 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 13.26s
2026-04-11 07:41:32.434228 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.89s
2026-04-11 07:41:32.434281 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.88s
2026-04-11 07:41:32.434297 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.87s
2026-04-11 07:41:32.434313 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 12.49s
2026-04-11 07:41:32.434328 | orchestrator | prometheus : Gracefully stop Prometheus --------------------------------- 8.05s
2026-04-11 07:41:32.434344 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.44s
2026-04-11 07:41:32.434359 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 6.37s
2026-04-11 07:41:32.434372 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.24s
2026-04-11 07:41:32.434386 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.93s
2026-04-11 07:41:32.434401 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.84s
2026-04-11 07:41:32.434418 | orchestrator | prometheus : include_tasks ---------------------------------------------- 4.38s
2026-04-11 07:41:32.434433 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 4.04s
2026-04-11 07:41:32.434449 | orchestrator | prometheus : Flush handlers --------------------------------------------- 3.83s
2026-04-11 07:41:32.434465 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.58s
2026-04-11 07:41:33.934086 | orchestrator | 2026-04-11 07:41:33 | INFO  | Prepare task for execution of grafana.
2026-04-11 07:41:34.008593 | orchestrator | 2026-04-11 07:41:34 | INFO  | Task 3624ebba-5409-4fac-b96f-83dd9a4a7711 (grafana) was prepared for execution.
2026-04-11 07:41:34.008726 | orchestrator | 2026-04-11 07:41:34 | INFO  | It takes a moment until task 3624ebba-5409-4fac-b96f-83dd9a4a7711 (grafana) has been started and output is visible here.
2026-04-11 07:41:48.149974 | orchestrator |
2026-04-11 07:41:48.150179 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-11 07:41:48.150200 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-11 07:41:48.150214 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-11 07:41:48.150236 | orchestrator |
2026-04-11 07:41:48.150247 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-11 07:41:48.150258 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-11 07:41:48.150269 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-11 07:41:48.150290 | orchestrator | Saturday 11 April 2026 07:41:38 +0000 (0:00:01.228) 0:00:01.228 ********
2026-04-11 07:41:48.150301 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:41:48.150313 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:41:48.150324 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:41:48.150334 | orchestrator |
2026-04-11 07:41:48.150345 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-11 07:41:48.150356 | orchestrator | Saturday 11 April 2026 07:41:39 +0000 (0:00:00.992) 0:00:02.220 ********
2026-04-11 07:41:48.150367 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-04-11 07:41:48.150378 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-04-11 07:41:48.150389 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-04-11 07:41:48.150400 | orchestrator |
2026-04-11 07:41:48.150410 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-04-11 07:41:48.150421 | orchestrator |
2026-04-11 07:41:48.150432 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-11 07:41:48.150442 | orchestrator | Saturday 11 April 2026 07:41:40 +0000 (0:00:00.702) 0:00:02.923 ********
2026-04-11 07:41:48.150453 | orchestrator | included: /ansible/roles/grafana/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 07:41:48.150466 | orchestrator |
2026-04-11 07:41:48.150479 | orchestrator | TASK [grafana : Checking if Grafana container needs upgrading] *****************
2026-04-11 07:41:48.150491 | orchestrator | Saturday 11 April 2026 07:41:41 +0000 (0:00:01.182) 0:00:04.106 ********
2026-04-11 07:41:48.150504 | orchestrator | ok: [testbed-node-2]
2026-04-11 07:41:48.150516 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:41:48.150528 | orchestrator | ok: [testbed-node-1]
2026-04-11 07:41:48.150541 | orchestrator |
2026-04-11 07:41:48.150553 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-04-11 07:41:48.150565 | orchestrator | Saturday 11 April 2026 07:41:43 +0000 (0:00:02.052) 0:00:06.158 ********
2026-04-11 07:41:48.150581 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:41:48.150596 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:41:48.150660 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:41:48.150674 | orchestrator |
2026-04-11 07:41:48.150685 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-04-11 07:41:48.150696 | orchestrator | Saturday 11 April 2026 07:41:44 +0000 (0:00:00.814) 0:00:06.973 ********
2026-04-11 07:41:48.150707 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-11 07:41:48.150718 | orchestrator |
2026-04-11 07:41:48.150729 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-11 07:41:48.150739 | orchestrator | Saturday 11 April 2026 07:41:45 +0000 (0:00:01.176) 0:00:08.149 ********
2026-04-11 07:41:48.150750 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-11 07:41:48.150761 | orchestrator |
2026-04-11 07:41:48.150772 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-04-11 07:41:48.150782 | orchestrator | Saturday 11 April 2026 07:41:46 +0000 (0:00:01.118) 0:00:09.268 ********
2026-04-11 07:41:48.150793 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:41:48.150805 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:41:48.150816 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:41:48.150834 | orchestrator | 2026-04-11 07:41:48.150845 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-11 07:41:48.150888 | orchestrator | Saturday 11 April 2026 07:41:47 +0000 (0:00:01.263) 0:00:10.531 ******** 2026-04-11 07:41:48.150907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:41:48.150926 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:41:51.916620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:41:51.916734 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:41:51.916761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:41:51.916781 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:41:51.916802 | orchestrator | 2026-04-11 07:41:51.916824 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-11 07:41:51.916844 | orchestrator | Saturday 11 April 2026 07:41:48 +0000 (0:00:00.534) 0:00:11.066 ******** 2026-04-11 07:41:51.916917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:41:51.916954 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:41:51.916967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:41:51.916978 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:41:51.917025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:41:51.917038 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:41:51.917049 | orchestrator | 2026-04-11 07:41:51.917061 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-11 07:41:51.917072 | orchestrator | Saturday 11 April 2026 07:41:49 +0000 (0:00:00.964) 0:00:12.031 ******** 2026-04-11 07:41:51.917083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:41:51.917095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:41:51.917107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:41:51.917127 | orchestrator | 2026-04-11 07:41:51.917138 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-11 07:41:51.917149 | 
orchestrator | Saturday 11 April 2026 07:41:50 +0000 (0:00:01.415) 0:00:13.446 ******** 2026-04-11 07:41:51.917161 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:41:51.917186 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:42:00.916304 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:42:00.916388 | orchestrator | 2026-04-11 07:42:00.916396 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-11 07:42:00.916401 | orchestrator | Saturday 11 April 2026 07:41:52 +0000 (0:00:01.683) 0:00:15.130 ******** 2026-04-11 07:42:00.916405 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:42:00.916411 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:42:00.916415 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:42:00.916419 | orchestrator | 2026-04-11 07:42:00.916424 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-11 07:42:00.916428 | orchestrator | Saturday 11 April 2026 07:41:52 +0000 (0:00:00.355) 0:00:15.486 ******** 2026-04-11 07:42:00.916512 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-11 07:42:00.916522 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-11 07:42:00.916528 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-11 07:42:00.916535 | orchestrator | 2026-04-11 07:42:00.916541 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-11 07:42:00.916547 | orchestrator | Saturday 11 April 2026 07:41:54 +0000 (0:00:01.218) 0:00:16.704 ******** 
2026-04-11 07:42:00.916554 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-11 07:42:00.916558 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-11 07:42:00.916562 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-11 07:42:00.916566 | orchestrator | 2026-04-11 07:42:00.916570 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-04-11 07:42:00.916574 | orchestrator | Saturday 11 April 2026 07:41:55 +0000 (0:00:01.247) 0:00:17.952 ******** 2026-04-11 07:42:00.916578 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 07:42:00.916582 | orchestrator | 2026-04-11 07:42:00.916585 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-04-11 07:42:00.916589 | orchestrator | Saturday 11 April 2026 07:41:56 +0000 (0:00:00.763) 0:00:18.716 ******** 2026-04-11 07:42:00.916593 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:42:00.916597 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:42:00.916601 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:42:00.916605 | orchestrator | 2026-04-11 07:42:00.916609 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-11 07:42:00.916612 | orchestrator | Saturday 11 April 2026 07:41:57 +0000 (0:00:00.972) 0:00:19.689 ******** 2026-04-11 07:42:00.916616 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:42:00.916620 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:42:00.916624 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:42:00.916628 | orchestrator | 2026-04-11 07:42:00.916631 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-04-11 
07:42:00.916635 | orchestrator | Saturday 11 April 2026 07:41:58 +0000 (0:00:01.671) 0:00:21.361 ******** 2026-04-11 07:42:00.916651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:42:00.916671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:42:00.916681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:42:00.916685 | orchestrator | 2026-04-11 07:42:00.916689 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-04-11 07:42:00.916693 | orchestrator | Saturday 11 April 2026 07:41:59 +0000 (0:00:01.251) 0:00:22.612 ******** 2026-04-11 07:42:00.916697 | orchestrator | changed: [testbed-node-0] => { 2026-04-11 07:42:00.916701 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:42:00.916705 | orchestrator | } 2026-04-11 07:42:00.916709 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 07:42:00.916713 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:42:00.916717 | orchestrator | } 2026-04-11 07:42:00.916721 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 07:42:00.916725 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:42:00.916728 | orchestrator | } 2026-04-11 07:42:00.916732 | orchestrator | 2026-04-11 07:42:00.916736 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 07:42:00.916740 | orchestrator | Saturday 11 April 2026 07:42:00 +0000 (0:00:00.640) 0:00:23.253 ******** 2026-04-11 07:42:00.916744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:42:00.916748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:42:00.916752 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:42:00.916756 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:42:00.916767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-11 07:43:45.692244 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:43:45.692365 | orchestrator |
2026-04-11 07:43:45.692382 | orchestrator | TASK [grafana : Stopping all Grafana instances but the first node] *************
2026-04-11 07:43:45.692396 | orchestrator | Saturday 11 April 2026 07:42:01 +0000 (0:00:00.621) 0:00:23.875 ********
2026-04-11 07:43:45.692408 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:43:45.692420 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:43:45.692432 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:43:45.692443 | orchestrator |
2026-04-11 07:43:45.692455 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-11 07:43:45.692467 | orchestrator | Saturday 11 April 2026 07:42:07 +0000 (0:00:05.940) 0:00:29.815 ********
2026-04-11 07:43:45.692479 | orchestrator |
2026-04-11 07:43:45.692491 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-11 07:43:45.692502 | orchestrator | Saturday 11 April 2026 07:42:07 +0000 (0:00:00.073) 0:00:29.888 ********
2026-04-11 07:43:45.692514 | orchestrator |
2026-04-11 07:43:45.692525 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-11 07:43:45.692537 | orchestrator | Saturday 11 April 2026 07:42:07 +0000 (0:00:00.072) 0:00:29.961 ********
2026-04-11 07:43:45.692548 | orchestrator |
2026-04-11 07:43:45.692560 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-04-11 07:43:45.692572 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-11 07:43:45.692584 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-11 07:43:45.692607 | orchestrator | Saturday 11 April 2026 07:42:07 +0000 (0:00:00.305) 0:00:30.267 ********
2026-04-11 07:43:45.692619 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:43:45.692631 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:43:45.692642 | orchestrator | changed: [testbed-node-0]
2026-04-11 07:43:45.692654 | orchestrator |
2026-04-11 07:43:45.692666 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-04-11 07:43:45.692677 | orchestrator | Saturday 11 April 2026 07:42:42 +0000 (0:00:35.178) 0:01:05.446 ********
2026-04-11 07:43:45.692689 | orchestrator | skipping: [testbed-node-1]
2026-04-11 07:43:45.692701 | orchestrator | skipping: [testbed-node-2]
2026-04-11 07:43:45.692713 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-04-11 07:43:45.692726 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-04-11 07:43:45.692761 | orchestrator | ok: [testbed-node-0]
2026-04-11 07:43:45.692773 | orchestrator |
2026-04-11 07:43:45.692784 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-04-11 07:43:45.692795 | orchestrator | Saturday 11 April 2026 07:43:09 +0000 (0:00:26.512) 0:01:31.958 ********
2026-04-11 07:43:45.692806 | orchestrator | skipping: [testbed-node-0]
2026-04-11 07:43:45.692817 | orchestrator | changed: [testbed-node-1]
2026-04-11 07:43:45.692828 | orchestrator | changed: [testbed-node-2]
2026-04-11 07:43:45.692839 | orchestrator |
2026-04-11 07:43:45.692849 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 07:43:45.692861 | orchestrator | testbed-node-0 : ok=19  changed=6  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 07:43:45.692872 | orchestrator | testbed-node-1 : ok=17  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 07:43:45.692908 | orchestrator | testbed-node-2 : ok=17  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-11 07:43:45.692920 | orchestrator |
2026-04-11 07:43:45.692930 | orchestrator |
2026-04-11 07:43:45.692941 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 07:43:45.692952 | orchestrator | Saturday 11 April 2026 07:43:45 +0000 (0:00:36.068) 0:02:08.027 ********
2026-04-11 07:43:45.692962 | orchestrator | ===============================================================================
2026-04-11 07:43:45.692973 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 36.07s
2026-04-11 07:43:45.692984 | orchestrator | grafana : Restart first grafana container ------------------------------ 35.18s
2026-04-11 07:43:45.692994 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.51s
2026-04-11 07:43:45.693005 | orchestrator | grafana : Stopping all Grafana instances but the first node ------------- 5.94s
2026-04-11 07:43:45.693016 | orchestrator | grafana : Checking if Grafana container needs upgrading ----------------- 2.05s
2026-04-11 07:43:45.693027 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.68s
2026-04-11 07:43:45.693051 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.67s
2026-04-11 07:43:45.693062 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.42s
2026-04-11 07:43:45.693073 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.26s
2026-04-11 07:43:45.693084 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.25s
2026-04-11 07:43:45.693095 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.25s
2026-04-11 07:43:45.693105 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.22s
2026-04-11 07:43:45.693116 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.18s
2026-04-11 07:43:45.693126 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.18s
2026-04-11 07:43:45.693154 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.12s
2026-04-11 07:43:45.693166 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.99s
2026-04-11 07:43:45.693177 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.97s
2026-04-11 07:43:45.693188 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.96s
2026-04-11 07:43:45.693198 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.81s
2026-04-11 07:43:45.693209 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.76s
2026-04-11 07:43:45.889722 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/510-clusterapi.sh
2026-04-11 07:43:45.899400 | orchestrator | + set -e
2026-04-11 07:43:45.899473 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-11 07:43:45.899489 | orchestrator | ++ export INTERACTIVE=false
2026-04-11 07:43:45.899502 | orchestrator | ++ INTERACTIVE=false
2026-04-11 07:43:45.899513 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-11 07:43:45.899524 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-11 07:43:45.899536 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-11 07:43:45.900660 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-11 07:43:45.907044 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-11 07:43:45.907084 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-11 07:43:45.908098 | orchestrator | ++ semver 10.0.0 8.0.0
2026-04-11 07:43:45.977452 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-11 07:43:45.977544 | orchestrator | + osism apply clusterapi
2026-04-11 07:43:47.251840 | orchestrator | 2026-04-11 07:43:47 | INFO  | Prepare task for execution of clusterapi.
2026-04-11 07:43:47.326719 | orchestrator | 2026-04-11 07:43:47 | INFO  | Task 6d59eb28-9a32-4b98-80a5-fb7209318210 (clusterapi) was prepared for execution.
2026-04-11 07:43:47.326813 | orchestrator | 2026-04-11 07:43:47 | INFO  | It takes a moment until task 6d59eb28-9a32-4b98-80a5-fb7209318210 (clusterapi) has been started and output is visible here.
2026-04-11 07:44:59.694757 | orchestrator |
2026-04-11 07:44:59.694876 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-04-11 07:44:59.694893 | orchestrator |
2026-04-11 07:44:59.694906 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-04-11 07:44:59.694917 | orchestrator | Saturday 11 April 2026 07:43:53 +0000 (0:00:01.663) 0:00:01.663 ********
2026-04-11 07:44:59.694929 | orchestrator | included: cert_manager for testbed-manager
2026-04-11 07:44:59.694940 | orchestrator |
2026-04-11 07:44:59.694951 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-04-11 07:44:59.694963 | orchestrator | Saturday 11 April 2026 07:43:55 +0000 (0:00:01.906) 0:00:03.570 ********
2026-04-11 07:44:59.694973 | orchestrator | ok: [testbed-manager]
2026-04-11 07:44:59.694985 | orchestrator |
2026-04-11 07:44:59.694996 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-04-11 07:44:59.695006 | orchestrator | Saturday 11 April 2026 07:44:00 +0000 (0:00:04.547) 0:00:08.118 ********
2026-04-11 07:44:59.695017 | orchestrator | ok: [testbed-manager]
2026-04-11 07:44:59.695028 | orchestrator |
2026-04-11 07:44:59.695039 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-04-11 07:44:59.695050 | orchestrator |
2026-04-11 07:44:59.695061 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-04-11 07:44:59.695072 | orchestrator | Saturday 11 April 2026 07:44:05 +0000 (0:00:05.611) 0:00:13.729 ********
2026-04-11 07:44:59.695083 | orchestrator | ok: [testbed-manager]
2026-04-11 07:44:59.695094 | orchestrator |
2026-04-11 07:44:59.695104 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-04-11 07:44:59.695115 | orchestrator | Saturday 11 April 2026 07:44:08 +0000 (0:00:02.380) 0:00:16.110 ********
2026-04-11 07:44:59.695126 | orchestrator | ok: [testbed-manager]
2026-04-11 07:44:59.695137 | orchestrator |
2026-04-11 07:44:59.695148 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-04-11 07:44:59.695159 | orchestrator | Saturday 11 April 2026 07:44:09 +0000 (0:00:01.173) 0:00:17.283 ********
2026-04-11 07:44:59.695170 | orchestrator | skipping: [testbed-manager]
2026-04-11 07:44:59.695182 | orchestrator |
2026-04-11 07:44:59.695193 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-04-11 07:44:59.695204 | orchestrator | Saturday 11 April 2026 07:44:10 +0000 (0:00:01.147) 0:00:18.431 ********
2026-04-11 07:44:59.695215 | orchestrator | ok: [testbed-manager]
2026-04-11 07:44:59.695226 | orchestrator |
2026-04-11 07:44:59.695237 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-04-11 07:44:59.695248 | orchestrator | Saturday 11 April 2026 07:44:55 +0000 (0:00:45.184) 0:01:03.616 ********
2026-04-11 07:44:59.695259 | orchestrator | changed: [testbed-manager]
2026-04-11 07:44:59.695270 | orchestrator |
2026-04-11 07:44:59.695280 | orchestrator | PLAY RECAP *********************************************************************
2026-04-11 07:44:59.695293 | orchestrator | testbed-manager : ok=7  changed=1  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-11 07:44:59.695304 | orchestrator |
2026-04-11 07:44:59.695315 | orchestrator |
2026-04-11 07:44:59.695326 | orchestrator | TASKS RECAP ********************************************************************
2026-04-11 07:44:59.695355 | orchestrator | Saturday 11 April 2026 07:44:59 +0000 (0:00:03.649) 0:01:07.266 ********
2026-04-11 07:44:59.695367 | orchestrator | ===============================================================================
2026-04-11 07:44:59.695378 | orchestrator | Upgrade the CAPI management cluster ------------------------------------ 45.18s
2026-04-11 07:44:59.695389 | orchestrator | cert_manager : Deploy cert-manager -------------------------------------- 5.61s
2026-04-11 07:44:59.695400 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 4.55s
2026-04-11 07:44:59.695411 | orchestrator | Install openstack-resource-controller ----------------------------------- 3.65s
2026-04-11 07:44:59.695422 | orchestrator | Get capi-system namespace phase ----------------------------------------- 2.38s
2026-04-11 07:44:59.695455 | orchestrator | Include cert_manager role ----------------------------------------------- 1.91s
2026-04-11 07:44:59.695467 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 1.17s
2026-04-11 07:44:59.695477 | orchestrator | Initialize the CAPI management cluster ---------------------------------- 1.15s
2026-04-11 07:44:59.889662 | orchestrator | + osism apply -a upgrade magnum
2026-04-11 07:45:01.214314 | orchestrator | 2026-04-11 07:45:01 | INFO  | Prepare task for execution of magnum.
2026-04-11 07:45:01.284219 | orchestrator | 2026-04-11 07:45:01 | INFO  | Task d5dbb6b1-25dc-4a05-9c65-100f00a4d199 (magnum) was prepared for execution.
2026-04-11 07:45:01.284312 | orchestrator | 2026-04-11 07:45:01 | INFO  | It takes a moment until task d5dbb6b1-25dc-4a05-9c65-100f00a4d199 (magnum) has been started and output is visible here.
2026-04-11 07:45:21.791339 | orchestrator | 2026-04-11 07:45:21.791461 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 07:45:21.791480 | orchestrator | 2026-04-11 07:45:21.791492 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 07:45:21.791504 | orchestrator | Saturday 11 April 2026 07:45:06 +0000 (0:00:02.239) 0:00:02.239 ******** 2026-04-11 07:45:21.791515 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:45:21.791527 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:45:21.791538 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:45:21.791549 | orchestrator | 2026-04-11 07:45:21.791560 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 07:45:21.791571 | orchestrator | Saturday 11 April 2026 07:45:08 +0000 (0:00:01.646) 0:00:03.886 ******** 2026-04-11 07:45:21.791582 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-11 07:45:21.791593 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-11 07:45:21.791604 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-11 07:45:21.791615 | orchestrator | 2026-04-11 07:45:21.791626 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-11 07:45:21.791637 | orchestrator | 2026-04-11 07:45:21.791648 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-11 07:45:21.791721 | orchestrator | Saturday 11 April 2026 07:45:10 +0000 (0:00:01.729) 0:00:05.615 ******** 2026-04-11 07:45:21.791734 | orchestrator | included: /ansible/roles/magnum/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:45:21.791746 | orchestrator | 2026-04-11 07:45:21.791757 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-11 
07:45:21.791768 | orchestrator | Saturday 11 April 2026 07:45:12 +0000 (0:00:01.866) 0:00:07.481 ******** 2026-04-11 07:45:21.791786 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:21.791820 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option 
httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:21.791883 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:21.791899 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:21.791913 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:21.791927 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:21.791947 | orchestrator | 2026-04-11 07:45:21.791960 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-11 07:45:21.791972 | orchestrator | Saturday 11 April 2026 07:45:15 +0000 (0:00:03.305) 0:00:10.786 ******** 2026-04-11 07:45:21.791985 | 
orchestrator | skipping: [testbed-node-0] 2026-04-11 07:45:21.791999 | orchestrator | 2026-04-11 07:45:21.792012 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-11 07:45:21.792025 | orchestrator | Saturday 11 April 2026 07:45:16 +0000 (0:00:01.202) 0:00:11.989 ******** 2026-04-11 07:45:21.792043 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:45:21.792056 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:45:21.792069 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:45:21.792082 | orchestrator | 2026-04-11 07:45:21.792094 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-11 07:45:21.792107 | orchestrator | Saturday 11 April 2026 07:45:18 +0000 (0:00:01.401) 0:00:13.390 ******** 2026-04-11 07:45:21.792121 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-11 07:45:21.792133 | orchestrator | 2026-04-11 07:45:21.792145 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-11 07:45:21.792158 | orchestrator | Saturday 11 April 2026 07:45:20 +0000 (0:00:02.391) 0:00:15.781 ******** 2026-04-11 07:45:21.792180 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 
'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:29.245374 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:29.245505 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:29.245582 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:29.245607 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:29.245720 | orchestrator | 
ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:29.245746 | orchestrator | 2026-04-11 07:45:29.245767 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-11 07:45:29.245787 | orchestrator | Saturday 11 April 2026 07:45:24 +0000 (0:00:03.634) 0:00:19.415 ******** 2026-04-11 07:45:29.245807 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:45:29.245846 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:45:29.245879 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:45:29.245900 | orchestrator | 2026-04-11 07:45:29.245919 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-11 07:45:29.245939 | orchestrator | Saturday 11 April 2026 07:45:25 +0000 (0:00:01.355) 0:00:20.771 ******** 2026-04-11 07:45:29.245959 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:45:29.245979 | orchestrator | 2026-04-11 07:45:29.245998 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-11 07:45:29.246012 | orchestrator | Saturday 11 April 2026 07:45:27 +0000 (0:00:01.878) 0:00:22.649 ******** 2026-04-11 07:45:29.246090 | orchestrator 
| ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:29.246128 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:29.246144 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:29.246174 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:32.844105 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:32.844266 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:32.844285 | orchestrator | 2026-04-11 07:45:32.844299 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-11 07:45:32.844312 | orchestrator | Saturday 11 April 2026 07:45:30 +0000 (0:00:03.270) 0:00:25.920 ******** 2026-04-11 07:45:32.844342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:45:32.844357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:45:32.844392 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 07:45:32.844412 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:45:32.844426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 07:45:32.844437 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:45:32.844454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:45:32.844467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 07:45:32.844478 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:45:32.844489 | orchestrator | 2026-04-11 07:45:32.844501 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-11 07:45:32.844512 | orchestrator | Saturday 11 April 2026 07:45:32 +0000 (0:00:01.863) 0:00:27.783 ******** 2026-04-11 07:45:32.844532 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:45:36.835745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 07:45:36.835864 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:45:36.835917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:45:36.835945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 07:45:36.835966 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:45:36.835980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:45:36.836036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 07:45:36.836050 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:45:36.836061 | orchestrator | 2026-04-11 07:45:36.836074 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-11 07:45:36.836086 | orchestrator | 
Saturday 11 April 2026 07:45:34 +0000 (0:00:02.212) 0:00:29.996 ******** 2026-04-11 07:45:36.836098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:36.836117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:36.836130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:36.836159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:44.946409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:44.946530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:44.946545 | orchestrator | 2026-04-11 07:45:44.946556 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-11 07:45:44.946567 | orchestrator | Saturday 11 April 2026 07:45:38 +0000 (0:00:03.639) 0:00:33.636 ******** 2026-04-11 
07:45:44.946580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:44.946613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:44.946671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:44.946689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 
07:45:44.946700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:44.946709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:45:44.946724 | orchestrator | 2026-04-11 07:45:44.946734 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-11 07:45:44.946742 | orchestrator | Saturday 11 April 2026 07:45:44 +0000 (0:00:06.315) 0:00:39.952 ******** 2026-04-11 07:45:44.946759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:45:49.228624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 07:45:49.228791 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:45:49.228830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:45:49.228846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 07:45:49.228879 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:45:49.228927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:45:49.228963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 07:45:49.228975 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:45:49.228986 | orchestrator | 2026-04-11 07:45:49.228998 | orchestrator | TASK [service-check-containers : magnum | Check containers] ******************** 2026-04-11 07:45:49.229011 | orchestrator | Saturday 11 April 2026 07:45:46 +0000 
(0:00:02.303) 0:00:42.255 ******** 2026-04-11 07:45:49.229028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:49.229041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:49.229062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-11 07:45:49.229082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}}) 2026-04-11 07:46:17.381967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:46:17.382114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-11 07:46:17.382141 | orchestrator | 2026-04-11 07:46:17.382147 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] *** 2026-04-11 07:46:17.382153 | orchestrator | Saturday 11 April 2026 07:45:50 +0000 (0:00:03.814) 0:00:46.069 ******** 2026-04-11 07:46:17.382159 | orchestrator | changed: [testbed-node-0] => { 
2026-04-11 07:46:17.382165 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:46:17.382171 | orchestrator | } 2026-04-11 07:46:17.382176 | orchestrator | changed: [testbed-node-1] => { 2026-04-11 07:46:17.382181 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:46:17.382186 | orchestrator | } 2026-04-11 07:46:17.382190 | orchestrator | changed: [testbed-node-2] => { 2026-04-11 07:46:17.382195 | orchestrator |  "msg": "Notifying handlers" 2026-04-11 07:46:17.382199 | orchestrator | } 2026-04-11 07:46:17.382204 | orchestrator | 2026-04-11 07:46:17.382209 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-11 07:46:17.382214 | orchestrator | Saturday 11 April 2026 07:45:52 +0000 (0:00:01.346) 0:00:47.416 ******** 2026-04-11 07:46:17.382221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:46:17.382227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 07:46:17.382232 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:46:17.382250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:46:17.382262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 07:46:17.382268 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:46:17.382273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-11 07:46:17.382278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-11 07:46:17.382283 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:46:17.382288 | orchestrator | 2026-04-11 07:46:17.382292 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-11 07:46:17.382297 | orchestrator | Saturday 11 April 2026 07:45:54 +0000 (0:00:02.234) 0:00:49.651 ******** 2026-04-11 07:46:17.382302 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:46:17.382307 | orchestrator | 2026-04-11 07:46:17.382311 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-11 07:46:17.382316 | orchestrator | Saturday 11 April 2026 07:46:16 +0000 (0:00:22.640) 0:01:12.292 ******** 2026-04-11 07:46:17.382320 | orchestrator | 2026-04-11 07:46:17.382325 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-11 07:46:17.382333 | orchestrator | Saturday 11 April 2026 07:46:17 +0000 (0:00:00.449) 0:01:12.741 ******** 2026-04-11 07:47:10.583688 | orchestrator | 2026-04-11 07:47:10.583840 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-11 07:47:10.583858 | orchestrator | Saturday 11 April 2026 07:46:17 +0000 (0:00:00.418) 0:01:13.160 ******** 2026-04-11 07:47:10.583871 | orchestrator | 2026-04-11 07:47:10.583882 | orchestrator | RUNNING HANDLER [magnum : 
Restart magnum-api container] ************************ 2026-04-11 07:47:10.583893 | orchestrator | Saturday 11 April 2026 07:46:18 +0000 (0:00:00.917) 0:01:14.077 ******** 2026-04-11 07:47:10.583947 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:47:10.583971 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:47:10.583990 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:47:10.584009 | orchestrator | 2026-04-11 07:47:10.584028 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-11 07:47:10.584048 | orchestrator | Saturday 11 April 2026 07:46:41 +0000 (0:00:22.334) 0:01:36.411 ******** 2026-04-11 07:47:10.584066 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:47:10.584086 | orchestrator | changed: [testbed-node-1] 2026-04-11 07:47:10.584098 | orchestrator | changed: [testbed-node-2] 2026-04-11 07:47:10.584109 | orchestrator | 2026-04-11 07:47:10.584120 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:47:10.584152 | orchestrator | testbed-node-0 : ok=16  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-11 07:47:10.584166 | orchestrator | testbed-node-1 : ok=14  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-11 07:47:10.584179 | orchestrator | testbed-node-2 : ok=14  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-11 07:47:10.584191 | orchestrator | 2026-04-11 07:47:10.584203 | orchestrator | 2026-04-11 07:47:10.584216 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:47:10.584229 | orchestrator | Saturday 11 April 2026 07:47:10 +0000 (0:00:29.189) 0:02:05.601 ******** 2026-04-11 07:47:10.584242 | orchestrator | =============================================================================== 2026-04-11 07:47:10.584255 | orchestrator | magnum : Restart magnum-conductor container 
---------------------------- 29.19s 2026-04-11 07:47:10.584267 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 22.64s 2026-04-11 07:47:10.584279 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 22.33s 2026-04-11 07:47:10.584291 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.32s 2026-04-11 07:47:10.584303 | orchestrator | service-check-containers : magnum | Check containers -------------------- 3.81s 2026-04-11 07:47:10.584315 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.64s 2026-04-11 07:47:10.584328 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.63s 2026-04-11 07:47:10.584340 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 3.31s 2026-04-11 07:47:10.584353 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.27s 2026-04-11 07:47:10.584366 | orchestrator | magnum : Check if kubeconfig file is supplied --------------------------- 2.39s 2026-04-11 07:47:10.584377 | orchestrator | magnum : Copying over existing policy file ------------------------------ 2.30s 2026-04-11 07:47:10.584387 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.23s 2026-04-11 07:47:10.584398 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.21s 2026-04-11 07:47:10.584408 | orchestrator | magnum : include_tasks -------------------------------------------------- 1.88s 2026-04-11 07:47:10.584419 | orchestrator | magnum : include_tasks -------------------------------------------------- 1.87s 2026-04-11 07:47:10.584429 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 1.86s 2026-04-11 07:47:10.584441 | orchestrator | magnum : Flush handlers 
------------------------------------------------- 1.79s 2026-04-11 07:47:10.584452 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.73s 2026-04-11 07:47:10.584462 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.65s 2026-04-11 07:47:10.584473 | orchestrator | magnum : Set magnum policy file ----------------------------------------- 1.40s 2026-04-11 07:47:11.441792 | orchestrator | ok: Runtime: 3:27:11.394580 2026-04-11 07:47:11.888158 | 2026-04-11 07:47:11.888303 | TASK [Bootstrap services] 2026-04-11 07:47:12.422936 | orchestrator | skipping: Conditional result was False 2026-04-11 07:47:12.447797 | 2026-04-11 07:47:12.447965 | TASK [Run checks after the upgrade] 2026-04-11 07:47:13.140327 | orchestrator | + set -e 2026-04-11 07:47:13.140514 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-11 07:47:13.140537 | orchestrator | ++ export INTERACTIVE=false 2026-04-11 07:47:13.140595 | orchestrator | ++ INTERACTIVE=false 2026-04-11 07:47:13.140612 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-11 07:47:13.140625 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-11 07:47:13.140639 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-11 07:47:13.140997 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-11 07:47:13.144520 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-11 07:47:13.144548 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-11 07:47:13.144583 | orchestrator | + echo 2026-04-11 07:47:13.144600 | orchestrator | 2026-04-11 07:47:13.144612 | orchestrator | # CHECK 2026-04-11 07:47:13.144623 | orchestrator | 2026-04-11 07:47:13.144644 | orchestrator | + echo '# CHECK' 2026-04-11 07:47:13.144655 | orchestrator | + echo 2026-04-11 07:47:13.144670 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 
testbed-node-2 2026-04-11 07:47:13.145599 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-11 07:47:13.203264 | orchestrator | 2026-04-11 07:47:13.203380 | orchestrator | ## Containers @ testbed-manager 2026-04-11 07:47:13.203406 | orchestrator | 2026-04-11 07:47:13.203427 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-11 07:47:13.203446 | orchestrator | + echo 2026-04-11 07:47:13.203466 | orchestrator | + echo '## Containers @ testbed-manager' 2026-04-11 07:47:13.203485 | orchestrator | + echo 2026-04-11 07:47:13.203503 | orchestrator | + osism container testbed-manager ps 2026-04-11 07:47:14.671669 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-11 07:47:14.671808 | orchestrator | 63574c8d572c registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328 "dumb-init --single-…" 5 minutes ago Up 5 minutes prometheus_blackbox_exporter 2026-04-11 07:47:14.671836 | orchestrator | 3b8de6116643 registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_alertmanager 2026-04-11 07:47:14.671852 | orchestrator | d6c26c87934f registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor 2026-04-11 07:47:14.671865 | orchestrator | 176577241bbd registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-11 07:47:14.671879 | orchestrator | 212c72959f6c registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_server 2026-04-11 07:47:14.671893 | orchestrator | 56869828f293 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron 2026-04-11 07:47:14.671913 | orchestrator | b5b08594dd89 
registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox 2026-04-11 07:47:14.671927 | orchestrator | 86ecef06c071 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd 2026-04-11 07:47:14.671970 | orchestrator | a33ba07aa6e9 registry.osism.tech/osism/openstackclient:2025.1 "/usr/bin/dumb-init …" 3 hours ago Up 3 hours openstackclient 2026-04-11 07:47:14.671985 | orchestrator | c6a99ba4013c registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" 3 hours ago Up 3 hours (healthy) manager-inventory_reconciler-1 2026-04-11 07:47:14.671999 | orchestrator | 460238e72ef6 registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) kolla-ansible 2026-04-11 07:47:14.672013 | orchestrator | ad8dddee6fff registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" 3 hours ago Up 3 hours (healthy) osismclient 2026-04-11 07:47:14.672026 | orchestrator | 5132508772ca registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) osism-ansible 2026-04-11 07:47:14.672064 | orchestrator | 63bb4556b132 registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) osism-kubernetes 2026-04-11 07:47:14.672079 | orchestrator | 1ca6c763bf74 registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) ceph-ansible 2026-04-11 07:47:14.672093 | orchestrator | 1ea42dc71a59 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-openstack-1 2026-04-11 07:47:14.672107 | orchestrator | 9ba2ab7a6c0a registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-04-11 07:47:14.672121 | orchestrator | 
c41b55b6dfe3 registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" 3 hours ago Up 3 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-04-11 07:47:14.672135 | orchestrator | 4161a9df8335 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up About an hour (healthy) manager-listener-1 2026-04-11 07:47:14.672150 | orchestrator | b8df411a2a55 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-beat-1 2026-04-11 07:47:14.672164 | orchestrator | 8dedbe48f2bd registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-flower-1 2026-04-11 07:47:14.672177 | orchestrator | 69833ca710a3 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 5 hours ago Up 5 hours cephclient 2026-04-11 07:47:14.672199 | orchestrator | ccfe7e33f249 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 5 hours ago Up 5 hours (healthy) 80/tcp phpmyadmin 2026-04-11 07:47:14.672213 | orchestrator | 8665e275d4a5 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 5 hours ago Up 5 hours (healthy) 8080/tcp homer 2026-04-11 07:47:14.672227 | orchestrator | baa8af4999aa registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 5 hours ago Up 5 hours 80/tcp cgit 2026-04-11 07:47:14.672240 | orchestrator | 803326b0285d registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 6 hours ago Up 6 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-04-11 07:47:14.672258 | orchestrator | 16ba2d0a41f8 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 6 hours ago Up 3 hours (healthy) 8000/tcp manager-ara-server-1 2026-04-11 07:47:14.672271 | orchestrator | 3c3c01ad4b46 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 6 hours ago Up 3 hours (healthy) 3306/tcp manager-mariadb-1 2026-04-11 07:47:14.672285 | orchestrator | aded781efa65 
registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 6 hours ago Up 3 hours (healthy) 6379/tcp manager-redis-1 2026-04-11 07:47:14.672306 | orchestrator | ef27e5528914 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 6 hours ago Up 6 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-04-11 07:47:14.817214 | orchestrator | 2026-04-11 07:47:14.817295 | orchestrator | ## Images @ testbed-manager 2026-04-11 07:47:14.817305 | orchestrator | 2026-04-11 07:47:14.817313 | orchestrator | + echo 2026-04-11 07:47:14.817321 | orchestrator | + echo '## Images @ testbed-manager' 2026-04-11 07:47:14.817328 | orchestrator | + echo 2026-04-11 07:47:14.817335 | orchestrator | + osism container testbed-manager images 2026-04-11 07:47:16.318696 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-11 07:47:16.318810 | orchestrator | registry.osism.tech/osism/openstackclient 2025.1 aaf3339aea81 4 hours ago 219MB 2026-04-11 07:47:16.318827 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 27fe929207d0 28 hours ago 246MB 2026-04-11 07:47:16.318839 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20260328.0 38f6ca42e9a0 11 days ago 635MB 2026-04-11 07:47:16.318850 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 13 days ago 590MB 2026-04-11 07:47:16.318867 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 13 days ago 683MB 2026-04-11 07:47:16.318885 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 13 days ago 277MB 2026-04-11 07:47:16.318902 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter 0.25.0.20260328 1bf017fd7bf3 13 days ago 319MB 2026-04-11 07:47:16.318952 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager 0.28.1.20260328 d1986023a383 13 
days ago 415MB 2026-04-11 07:47:16.318965 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 13 days ago 368MB 2026-04-11 07:47:16.318976 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-server 3.2.1.20260328 4f5732d5eb69 13 days ago 860MB 2026-04-11 07:47:16.318987 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 13 days ago 317MB 2026-04-11 07:47:16.318998 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20260322.0 3e18c5de9bc5 2 weeks ago 634MB 2026-04-11 07:47:16.319009 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20260322.0 c68c1f5728ae 2 weeks ago 1.24GB 2026-04-11 07:47:16.319020 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20260322.0 f6e7e0d58bb1 2 weeks ago 585MB 2026-04-11 07:47:16.319030 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20260322.0 9806642932fd 2 weeks ago 357MB 2026-04-11 07:47:16.319041 | orchestrator | registry.osism.tech/osism/osism 0.20260320.0 5d0420989a40 3 weeks ago 408MB 2026-04-11 07:47:16.319051 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20260320.0 80b833af5991 3 weeks ago 232MB 2026-04-11 07:47:16.319062 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB 2026-04-11 07:47:16.319086 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB 2026-04-11 07:47:16.319096 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB 2026-04-11 07:47:16.319116 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-11 07:47:16.319127 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-11 07:47:16.319137 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 
fb3c98fc8cae 4 months ago 578MB 2026-04-11 07:47:16.319147 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB 2026-04-11 07:47:16.319158 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-11 07:47:16.319169 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB 2026-04-11 07:47:16.319180 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB 2026-04-11 07:47:16.319191 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-11 07:47:16.319201 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB 2026-04-11 07:47:16.319212 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB 2026-04-11 07:47:16.319222 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB 2026-04-11 07:47:16.319254 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB 2026-04-11 07:47:16.319266 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB 2026-04-11 07:47:16.319277 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 4 months ago 238MB 2026-04-11 07:47:16.319296 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-04-11 07:47:16.319307 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB 2026-04-11 07:47:16.319318 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-04-11 07:47:16.319328 | orchestrator | registry.osism.tech/dockerhub/library/traefik 
v3.5.0 11cc59587f6a 8 months ago 226MB 2026-04-11 07:47:16.319339 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 11 months ago 453MB 2026-04-11 07:47:16.319349 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB 2026-04-11 07:47:16.319360 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-04-11 07:47:16.479799 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-11 07:47:16.479920 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-11 07:47:16.532635 | orchestrator | 2026-04-11 07:47:16.532748 | orchestrator | ## Containers @ testbed-node-0 2026-04-11 07:47:16.532772 | orchestrator | 2026-04-11 07:47:16.532791 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-11 07:47:16.532810 | orchestrator | + echo 2026-04-11 07:47:16.532830 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-04-11 07:47:16.532851 | orchestrator | + echo 2026-04-11 07:47:16.532870 | orchestrator | + osism container testbed-node-0 ps 2026-04-11 07:47:18.068328 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-11 07:47:18.069241 | orchestrator | b33b5a54dae0 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 22 seconds ago Up 20 seconds (health: starting) magnum_conductor 2026-04-11 07:47:18.069325 | orchestrator | 9429bb2e382c registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 50 seconds ago Up 48 seconds (healthy) magnum_api 2026-04-11 07:47:18.069340 | orchestrator | dfd9edd1c2bb registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 4 minutes ago Up 4 minutes grafana 2026-04-11 07:47:18.069351 | orchestrator | 8d430a6b6bfe registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes 
prometheus_elasticsearch_exporter 2026-04-11 07:47:18.069365 | orchestrator | 9284f48ea0f2 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor 2026-04-11 07:47:18.069890 | orchestrator | 707b93046725 registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_memcached_exporter 2026-04-11 07:47:18.069909 | orchestrator | e14cd3123940 registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter 2026-04-11 07:47:18.069920 | orchestrator | 08241d6efb0c registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-11 07:47:18.069931 | orchestrator | fa87591e69a4 registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) manila_share 2026-04-11 07:47:18.069968 | orchestrator | d337d0b3e50b registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler 2026-04-11 07:47:18.069995 | orchestrator | f1cc469bdc7d registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_data 2026-04-11 07:47:18.070006 | orchestrator | 3d3df4596129 registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-04-11 07:47:18.070054 | orchestrator | 22a03766b8fb registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) octavia_worker 2026-04-11 07:47:18.070068 | orchestrator | abef55b69707 
registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_housekeeping 2026-04-11 07:47:18.070080 | orchestrator | b665394e3d3b registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_health_manager 2026-04-11 07:47:18.070090 | orchestrator | 292df6669615 registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes octavia_driver_agent 2026-04-11 07:47:18.070101 | orchestrator | d9774576319d registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) octavia_api 2026-04-11 07:47:18.070111 | orchestrator | 8870cba3c473 registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_notifier 2026-04-11 07:47:18.070122 | orchestrator | 3ced8581091f registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_listener 2026-04-11 07:47:18.070132 | orchestrator | 9d1b24c536ad registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_evaluator 2026-04-11 07:47:18.070143 | orchestrator | 85e910585a29 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_api 2026-04-11 07:47:18.070592 | orchestrator | e5a0f2db7568 registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes ceilometer_central 2026-04-11 07:47:18.070617 | orchestrator | 90832c79ff4e registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes 
(healthy) ceilometer_notification
2026-04-11 07:47:18.070628 | orchestrator | 5bf527849473 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker
2026-04-11 07:47:18.070680 | orchestrator | d6ed8af788f4 registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns
2026-04-11 07:47:18.070694 | orchestrator | cf1a7c6f8686 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer
2026-04-11 07:47:18.070718 | orchestrator | c54d8fd6e663 registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-04-11 07:47:18.070729 | orchestrator | d2130f17a736 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-04-11 07:47:18.070741 | orchestrator | c814e33ce282 registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-04-11 07:47:18.070752 | orchestrator | 18c581c6154d registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-04-11 07:47:18.070763 | orchestrator | 9fcb84493457 registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-04-11 07:47:18.070774 | orchestrator | 7a8b79db3833 registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-04-11 07:47:18.070785 | orchestrator | b82db5e8f314 registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_backup
2026-04-11 07:47:18.070796 | orchestrator | b893e22a42c2 registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_volume
2026-04-11 07:47:18.070807 | orchestrator | 50689e049914 registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_scheduler
2026-04-11 07:47:18.070818 | orchestrator | ce0a6b3cf76c registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 36 minutes ago Up 33 minutes (healthy) cinder_api
2026-04-11 07:47:18.070864 | orchestrator | 59ec143107be registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) glance_api
2026-04-11 07:47:18.070876 | orchestrator | 5da4d03fb035 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) skyline_console
2026-04-11 07:47:18.070887 | orchestrator | 2644fba9d8b3 registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) skyline_apiserver
2026-04-11 07:47:18.070898 | orchestrator | f5a93ec5d364 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) horizon
2026-04-11 07:47:18.070920 | orchestrator | 68bb704e518b registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 47 minutes (healthy) nova_novncproxy
2026-04-11 07:47:18.070932 | orchestrator | f03675788410 registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 48 minutes (healthy) nova_conductor
2026-04-11 07:47:18.070957 | orchestrator | 64d016432f4a registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata
2026-04-11 07:47:18.070968 | orchestrator | 186dadc0f18b registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 47 minutes (healthy) nova_api
2026-04-11 07:47:18.070980 | orchestrator | 7bc5932ebf2a registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 47 minutes (healthy) nova_scheduler
2026-04-11 07:47:18.070990 | orchestrator | fac59a9b8d86 registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server
2026-04-11 07:47:18.071001 | orchestrator | e57caee40be0 registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api
2026-04-11 07:47:18.071096 | orchestrator | eb8405be47cd registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone
2026-04-11 07:47:18.071109 | orchestrator | e460a5ab76c2 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet
2026-04-11 07:47:18.071120 | orchestrator | 01e36fc21f1f registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh
2026-04-11 07:47:18.071131 | orchestrator | 1a369ed6b0b8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-04-11 07:47:18.071142 | orchestrator | 1f88a79285a2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-0
2026-04-11 07:47:18.071153 | orchestrator | d4d463bff890 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-0
2026-04-11 07:47:18.071164 | orchestrator | 78942e056931 registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_northd
2026-04-11 07:47:18.071175 | orchestrator | ffcda2358a8b registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db_relay_1
2026-04-11 07:47:18.071186 | orchestrator | d9532f9326dd registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db
2026-04-11 07:47:18.071197 | orchestrator | 8d8c5eafbc58 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_nb_db
2026-04-11 07:47:18.071208 | orchestrator | 67cec2629d89 registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_controller
2026-04-11 07:47:18.071220 | orchestrator | 91150dcef6f3 registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_vswitchd
2026-04-11 07:47:18.071240 | orchestrator | 23f88e96cb2a registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_db
2026-04-11 07:47:18.071259 | orchestrator | 9f1394a57205 registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) rabbitmq
2026-04-11 07:47:18.071271 | orchestrator | aa9956eabeac registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 3 hours ago Up 3 hours (healthy) mariadb
2026-04-11 07:47:18.071287 | orchestrator | 535a0aef7d5c registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis_sentinel
2026-04-11 07:47:18.071298 | orchestrator | a369d416b36c registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis
2026-04-11 07:47:18.071309 | orchestrator | fdd92d012048 registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) memcached
2026-04-11 07:47:18.071320 | orchestrator | 1635ed46b0df registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch_dashboards
2026-04-11 07:47:18.071331 | orchestrator | c65960d321eb registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch
2026-04-11 07:47:18.071342 | orchestrator | 8896d44eb963 registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours keepalived
2026-04-11 07:47:18.071353 | orchestrator | 4b51b868ddb4 registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) proxysql
2026-04-11 07:47:18.071364 | orchestrator | 11643eee3e12 registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) haproxy
2026-04-11 07:47:18.071375 | orchestrator | 87856333e944 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron
2026-04-11 07:47:18.071386 | orchestrator | 6588a83e1fd1 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox
2026-04-11 07:47:18.071397 | orchestrator | 742deb3113b6 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd
2026-04-11 07:47:18.230890 | orchestrator |
2026-04-11 07:47:18.231003 | orchestrator | ## Images @ testbed-node-0
2026-04-11 07:47:18.231020 | orchestrator |
2026-04-11 07:47:18.231032 | orchestrator | + echo
2026-04-11 07:47:18.231044 | orchestrator | + echo '## Images @ testbed-node-0'
2026-04-11 07:47:18.231056 | orchestrator | + echo
2026-04-11 07:47:18.231068 | orchestrator | + osism container testbed-node-0 images
2026-04-11 07:47:19.822986 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-11 07:47:19.823091 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 13 days ago 288MB
2026-04-11 07:47:19.823107 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 13 days ago 1.54GB
2026-04-11 07:47:19.823142 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 13 days ago 1.57GB
2026-04-11 07:47:19.823154 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 13 days ago 590MB
2026-04-11 07:47:19.823165 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 13 days ago 277MB
2026-04-11 07:47:19.823175 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 13 days ago 1.04GB
2026-04-11 07:47:19.823186 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 13 days ago 427MB
2026-04-11 07:47:19.823196 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 13 days ago 350MB
2026-04-11 07:47:19.823207 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 13 days ago 683MB
2026-04-11 07:47:19.823234 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 13 days ago 277MB
2026-04-11 07:47:19.823245 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 13 days ago 285MB
2026-04-11 07:47:19.823256 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 13 days ago 293MB
2026-04-11 07:47:19.823267 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 13 days ago 293MB
2026-04-11 07:47:19.823278 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 13 days ago 284MB
2026-04-11 07:47:19.823288 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 13 days ago 284MB
2026-04-11 07:47:19.823299 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 13 days ago 1.2GB
2026-04-11 07:47:19.823309 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 13 days ago 463MB
2026-04-11 07:47:19.823320 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 13 days ago 309MB
2026-04-11 07:47:19.823330 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 13 days ago 368MB
2026-04-11 07:47:19.823341 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 13 days ago 303MB
2026-04-11 07:47:19.823351 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 13 days ago 312MB
2026-04-11 07:47:19.823362 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 13 days ago 317MB
2026-04-11 07:47:19.823373 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 13 days ago 301MB
2026-04-11 07:47:19.823383 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 13 days ago 301MB
2026-04-11 07:47:19.823394 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 13 days ago 301MB
2026-04-11 07:47:19.823404 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 13 days ago 301MB
2026-04-11 07:47:19.823415 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 13 days ago 1.09GB
2026-04-11 07:47:19.823433 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 13 days ago 1.06GB
2026-04-11 07:47:19.823444 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 13 days ago 1.05GB
2026-04-11 07:47:19.823474 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 13 days ago 997MB
2026-04-11 07:47:19.823485 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 13 days ago 996MB
2026-04-11 07:47:19.823496 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 13 days ago 1.07GB
2026-04-11 07:47:19.823507 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 13 days ago 1.07GB
2026-04-11 07:47:19.823517 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 13 days ago 1.05GB
2026-04-11 07:47:19.823528 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 13 days ago 1.05GB
2026-04-11 07:47:19.823538 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 13 days ago 1.05GB
2026-04-11 07:47:19.823548 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 13 days ago 996MB
2026-04-11 07:47:19.823582 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 13 days ago 995MB
2026-04-11 07:47:19.823593 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 13 days ago 995MB
2026-04-11 07:47:19.823604 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 13 days ago 995MB
2026-04-11 07:47:19.823620 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 13 days ago 994MB
2026-04-11 07:47:19.823631 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 13 days ago 1.12GB
2026-04-11 07:47:19.823641 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 13 days ago 1.79GB
2026-04-11 07:47:19.823651 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 13 days ago 1.43GB
2026-04-11 07:47:19.823662 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 13 days ago 1.43GB
2026-04-11 07:47:19.823672 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 13 days ago 1.44GB
2026-04-11 07:47:19.823683 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 13 days ago 1.24GB
2026-04-11 07:47:19.823693 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 13 days ago 1.07GB
2026-04-11 07:47:19.823704 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 13 days ago 1.02GB
2026-04-11 07:47:19.823714 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 13 days ago 1GB
2026-04-11 07:47:19.823725 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 13 days ago 1GB
2026-04-11 07:47:19.823735 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 13 days ago 1GB
2026-04-11 07:47:19.823753 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 13 days ago 1.27GB
2026-04-11 07:47:19.823763 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 13 days ago 1.15GB
2026-04-11 07:47:19.823774 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 13 days ago 1.01GB
2026-04-11 07:47:19.823784 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 13 days ago 1GB
2026-04-11 07:47:19.823800 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 13 days ago 1GB
2026-04-11 07:47:19.823811 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 13 days ago 1.01GB
2026-04-11 07:47:19.823821 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 13 days ago 1GB
2026-04-11 07:47:19.823832 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 13 days ago 1GB
2026-04-11 07:47:19.823849 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 13 days ago 1.23GB
2026-04-11 07:47:19.823861 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 13 days ago 1.39GB
2026-04-11 07:47:19.823871 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 13 days ago 1.23GB
2026-04-11 07:47:19.823882 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 13 days ago 1.23GB
2026-04-11 07:47:19.823893 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 13 days ago 1.07GB
2026-04-11 07:47:19.823903 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 13 days ago 1.07GB
2026-04-11 07:47:19.823913 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 13 days ago 1.07GB
2026-04-11 07:47:19.823924 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 13 days ago 1.24GB
2026-04-11 07:47:19.823934 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 13 days ago 301MB
2026-04-11 07:47:19.823944 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-11 07:47:19.823955 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-11 07:47:19.823965 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-11 07:47:19.823976 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-11 07:47:19.823986 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-11 07:47:19.824002 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-11 07:47:19.824013 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-11 07:47:19.824023 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-11 07:47:19.824040 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-11 07:47:19.824051 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-11 07:47:19.824061 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-11 07:47:19.824071 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-11 07:47:19.824082 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-11 07:47:19.824092 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-11 07:47:19.824103 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-11 07:47:19.824113 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-11 07:47:19.824124 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-11 07:47:19.824134 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-11 07:47:19.824145 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-11 07:47:19.824155 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-11 07:47:19.824166 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-11 07:47:19.824181 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-11 07:47:19.824192 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-11 07:47:19.824203 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-11 07:47:19.824213 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-11 07:47:19.824224 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-11 07:47:19.824234 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-11 07:47:19.824245 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-11 07:47:19.824255 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-11 07:47:19.824266 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-11 07:47:19.824276 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-11 07:47:19.824287 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-11 07:47:19.824297 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-11 07:47:19.824314 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-11 07:47:19.824324 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-11 07:47:19.824335 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-11 07:47:19.824345 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-11 07:47:19.824356 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-11 07:47:19.824366 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-11 07:47:19.824376 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-11 07:47:19.824387 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-11 07:47:19.824397 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-11 07:47:19.824407 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-11 07:47:19.824418 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-11 07:47:19.824438 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-11 07:47:19.824449 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-11 07:47:19.824460 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-11 07:47:19.824471 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-11 07:47:19.824481 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-11 07:47:19.824491 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-11 07:47:19.824502 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-11 07:47:19.824518 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-11 07:47:19.824529 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-11 07:47:19.824540 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-11 07:47:19.824550 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-11 07:47:19.824580 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-11 07:47:19.824591 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-11 07:47:19.824601 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-11 07:47:19.824618 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-11 07:47:19.824629 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-11 07:47:19.824647 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-11 07:47:19.824665 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-11 07:47:19.824691 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-11 07:47:19.824712 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-11 07:47:19.824729 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-11 07:47:19.824746 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-11 07:47:19.824764 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-11 07:47:19.824781 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-11 07:47:19.824799 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-11 07:47:19.995065 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-11 07:47:19.995346 | orchestrator | ++ semver 10.0.0 5.0.0
2026-04-11 07:47:20.042100 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-11 07:47:20.042197 | orchestrator | + echo
2026-04-11 07:47:20.042213 | orchestrator |
2026-04-11 07:47:20.042224 | orchestrator | ## Containers @ testbed-node-1
2026-04-11 07:47:20.042237 | orchestrator |
2026-04-11 07:47:20.042249 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-04-11 07:47:20.042262 | orchestrator | + echo
2026-04-11 07:47:20.042274 | orchestrator | + osism container testbed-node-1 ps
2026-04-11 07:47:21.565935 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-11 07:47:21.566126 | orchestrator | 333b236dee52 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 24 seconds ago Up 23 seconds (health: starting) magnum_conductor
2026-04-11 07:47:21.566147 | orchestrator | 8f44feb30e0f registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 43 seconds ago Up 41 seconds (healthy) magnum_api
2026-04-11 07:47:21.566158 | orchestrator | f9d52c0bb480 registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 3 minutes ago Up 3 minutes grafana
2026-04-11 07:47:21.566169 | orchestrator | b76d7786bfc2 registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter
2026-04-11 07:47:21.566183 | orchestrator | 1bbc6fb70965 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor
2026-04-11 07:47:21.566194 | orchestrator | 31d93fc8dadc registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_memcached_exporter
2026-04-11 07:47:21.566205 | orchestrator | edaf5e209b93 registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter
2026-04-11 07:47:21.566246 | orchestrator | 043b0668f952 registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter
2026-04-11 07:47:21.566258 | orchestrator | 007b78dbd9ca registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) manila_share
2026-04-11 07:47:21.566285 | orchestrator | e3679a3ea16d registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler
2026-04-11 07:47:21.566297 | orchestrator | 40844918c264 registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_data
2026-04-11 07:47:21.566308 | orchestrator | f41a99158ec3 registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_api
2026-04-11 07:47:21.566319 | orchestrator | 626014e68d8d registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_worker
2026-04-11 07:47:21.566330 | orchestrator | 7230a3f4bff9 registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_housekeeping
2026-04-11 07:47:21.566341 | orchestrator | 23f351321e8b registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_health_manager
2026-04-11 07:47:21.566352 | orchestrator | 6e258825e86b registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes octavia_driver_agent
2026-04-11 07:47:21.566363 | orchestrator | 0f2b79eb0aad registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) octavia_api
2026-04-11 07:47:21.566391 | orchestrator | 6b556e2abdf6 registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_notifier
2026-04-11 07:47:21.566403 | orchestrator | 075157b84b63 registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_listener
2026-04-11 07:47:21.566413 | orchestrator | 30127436084a registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_evaluator
2026-04-11 07:47:21.566467 | orchestrator | 4297675bd973 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_api
2026-04-11 07:47:21.566481 | orchestrator | 071354480d18 registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes ceilometer_central
2026-04-11 07:47:21.566494 | orchestrator | 285c349b1632 registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) ceilometer_notification
2026-04-11 07:47:21.566551 | orchestrator | a4ceb7490091 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker
2026-04-11 07:47:21.566610 | orchestrator | 680191a449f0 registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns
2026-04-11 07:47:21.566803 | orchestrator | 6270d8a6dca8 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer
2026-04-11 07:47:21.566821 | orchestrator | 6eea7b23ff7c registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-04-11 07:47:21.566833 | orchestrator | 55ed636ae06e registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-04-11 07:47:21.566845 | orchestrator | fb776af94ffb registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-04-11 07:47:21.566856 | orchestrator | 6a7e0ee7b629 registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-04-11 07:47:21.566867 | orchestrator | 67abb1d238f6 registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-04-11 07:47:21.566877 | orchestrator | ac035f458c28 registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-04-11 07:47:21.566888 | orchestrator | fe80f2e4c350 registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_backup
2026-04-11 07:47:21.566899 | orchestrator | da0633ae0784 registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_volume
2026-04-11 07:47:21.566909 | orchestrator | f0870e934ad1 registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_scheduler
2026-04-11 07:47:21.566920 | orchestrator | d6b5a6ef9cf3 registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_api
2026-04-11 07:47:21.566931 | orchestrator | 178357620b7b registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) glance_api
2026-04-11 07:47:21.566942 | orchestrator | 66e02d507611 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) skyline_console
2026-04-11 07:47:21.566953 | orchestrator | 4b28935360ec registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_apiserver
2026-04-11 07:47:21.566963 | orchestrator | 1e8512a77583 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) horizon
2026-04-11 07:47:21.566990 | orchestrator | bf096b1f2f4e registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 48 minutes (healthy) nova_novncproxy
2026-04-11 07:47:21.567001 | orchestrator | b5cfdf60d15b registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 48 minutes (healthy) nova_conductor
2026-04-11 07:47:21.567012 | orchestrator | cec5ad522cc7 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata
2026-04-11 07:47:21.567022 | orchestrator | 3bf0565ac7e0 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 47 minutes (healthy) nova_api
2026-04-11 07:47:21.567041 | orchestrator | 01ae7caec1af registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 47 minutes (healthy) nova_scheduler
2026-04-11 07:47:21.567052 | orchestrator | fb1f8b649912 registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server
2026-04-11 07:47:21.567063 | orchestrator | e34f991565fb registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api
2026-04-11 07:47:21.567074 | orchestrator | c4c83cf1b82f registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone
2026-04-11 07:47:21.567085 | orchestrator | d6ad3dc5d3f6 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet
2026-04-11 07:47:21.567096 | orchestrator | 3d82b616ab2a registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh
2026-04-11 07:47:21.567107 | orchestrator | 6de21e28f23f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up
About an hour ceph-crash-testbed-node-1 2026-04-11 07:47:21.567118 | orchestrator | c8a16bdd7bc9 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-1 2026-04-11 07:47:21.567129 | orchestrator | 26fb3b048944 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-1 2026-04-11 07:47:21.567140 | orchestrator | cf87e8e15a28 registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_northd 2026-04-11 07:47:21.567150 | orchestrator | 7a7146c5f875 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db_relay_1 2026-04-11 07:47:21.567161 | orchestrator | 943736343e49 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db 2026-04-11 07:47:21.567172 | orchestrator | 9c2bcf845047 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_nb_db 2026-04-11 07:47:21.567189 | orchestrator | 7abce8e743e9 registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_controller 2026-04-11 07:47:21.567200 | orchestrator | f9ab6c7d220d registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_vswitchd 2026-04-11 07:47:21.567211 | orchestrator | 04f38ef298b5 registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_db 2026-04-11 07:47:21.567222 | orchestrator | 3c64d91b8729 registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) rabbitmq 2026-04-11 07:47:21.567232 | orchestrator | a38b78ed6a21 
registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 3 hours ago Up 3 hours (healthy) mariadb 2026-04-11 07:47:21.567243 | orchestrator | 4e38563c54bc registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis_sentinel 2026-04-11 07:47:21.567260 | orchestrator | f2cba3e9297c registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis 2026-04-11 07:47:21.567271 | orchestrator | ebc862244e8a registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) memcached 2026-04-11 07:47:21.567282 | orchestrator | 354825e4573f registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch_dashboards 2026-04-11 07:47:21.567293 | orchestrator | 3b70278e7a14 registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch 2026-04-11 07:47:21.567304 | orchestrator | a2238bd72073 registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours keepalived 2026-04-11 07:47:21.567314 | orchestrator | cfa8360be7df registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) proxysql 2026-04-11 07:47:21.567325 | orchestrator | f0fc81d53ba9 registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) haproxy 2026-04-11 07:47:21.567336 | orchestrator | 4a97dd11db74 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron 2026-04-11 07:47:21.567347 | orchestrator | 7328601e7744 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago 
Up 3 hours kolla_toolbox 2026-04-11 07:47:21.567358 | orchestrator | 947d21357f04 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd 2026-04-11 07:47:21.708419 | orchestrator | 2026-04-11 07:47:21.708536 | orchestrator | ## Images @ testbed-node-1 2026-04-11 07:47:21.708590 | orchestrator | 2026-04-11 07:47:21.708645 | orchestrator | + echo 2026-04-11 07:47:21.708665 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-11 07:47:21.708685 | orchestrator | + echo 2026-04-11 07:47:21.708702 | orchestrator | + osism container testbed-node-1 images 2026-04-11 07:47:23.333787 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-11 07:47:23.333921 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 13 days ago 288MB 2026-04-11 07:47:23.333946 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 13 days ago 1.54GB 2026-04-11 07:47:23.333986 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 13 days ago 1.57GB 2026-04-11 07:47:23.334003 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 13 days ago 590MB 2026-04-11 07:47:23.334081 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 13 days ago 277MB 2026-04-11 07:47:23.334101 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 13 days ago 1.04GB 2026-04-11 07:47:23.334124 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 13 days ago 350MB 2026-04-11 07:47:23.334141 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 13 days ago 427MB 2026-04-11 07:47:23.334157 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 13 days ago 683MB 
2026-04-11 07:47:23.334174 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 13 days ago 277MB 2026-04-11 07:47:23.334189 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 13 days ago 285MB 2026-04-11 07:47:23.334206 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 13 days ago 293MB 2026-04-11 07:47:23.334223 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 13 days ago 293MB 2026-04-11 07:47:23.334239 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 13 days ago 284MB 2026-04-11 07:47:23.334255 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 13 days ago 284MB 2026-04-11 07:47:23.334272 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 13 days ago 1.2GB 2026-04-11 07:47:23.334289 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 13 days ago 463MB 2026-04-11 07:47:23.334305 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 13 days ago 309MB 2026-04-11 07:47:23.334321 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 13 days ago 368MB 2026-04-11 07:47:23.334337 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 13 days ago 303MB 2026-04-11 07:47:23.334353 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 13 days ago 312MB 2026-04-11 07:47:23.334370 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 13 days ago 317MB 2026-04-11 07:47:23.334459 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 13 days ago 301MB 2026-04-11 07:47:23.334505 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 13 days ago 301MB 2026-04-11 07:47:23.334609 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 13 days ago 301MB 2026-04-11 07:47:23.334626 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 13 days ago 301MB 2026-04-11 07:47:23.334643 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 13 days ago 1.09GB 2026-04-11 07:47:23.334660 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 13 days ago 1.06GB 2026-04-11 07:47:23.334675 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 13 days ago 1.05GB 2026-04-11 07:47:23.334717 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 13 days ago 997MB 2026-04-11 07:47:23.334728 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 13 days ago 996MB 2026-04-11 07:47:23.334737 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 13 days ago 1.07GB 2026-04-11 07:47:23.334746 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 13 days ago 1.07GB 2026-04-11 07:47:23.334756 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 13 days ago 1.05GB 2026-04-11 07:47:23.334765 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 13 days ago 1.05GB 2026-04-11 07:47:23.334774 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 
16.0.2.20260328 1e4a4601f94f 13 days ago 1.05GB 2026-04-11 07:47:23.334784 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 13 days ago 996MB 2026-04-11 07:47:23.334800 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 13 days ago 995MB 2026-04-11 07:47:23.334816 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 13 days ago 995MB 2026-04-11 07:47:23.334831 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 13 days ago 995MB 2026-04-11 07:47:23.334846 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 13 days ago 994MB 2026-04-11 07:47:23.334861 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 13 days ago 1.12GB 2026-04-11 07:47:23.334875 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 13 days ago 1.79GB 2026-04-11 07:47:23.334891 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 13 days ago 1.43GB 2026-04-11 07:47:23.334908 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 13 days ago 1.43GB 2026-04-11 07:47:23.334925 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 13 days ago 1.44GB 2026-04-11 07:47:23.334942 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 13 days ago 1.24GB 2026-04-11 07:47:23.334958 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 13 days ago 1.07GB 2026-04-11 07:47:23.334990 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 13 days ago 1.02GB 2026-04-11 07:47:23.335008 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 13 days ago 1GB 2026-04-11 07:47:23.335023 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 13 days ago 1GB 2026-04-11 07:47:23.335036 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 13 days ago 1GB 2026-04-11 07:47:23.335045 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 13 days ago 1.27GB 2026-04-11 07:47:23.335055 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 13 days ago 1.15GB 2026-04-11 07:47:23.335064 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 13 days ago 1.01GB 2026-04-11 07:47:23.335074 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 13 days ago 1GB 2026-04-11 07:47:23.335083 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 13 days ago 1GB 2026-04-11 07:47:23.335092 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 13 days ago 1.01GB 2026-04-11 07:47:23.335101 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 13 days ago 1GB 2026-04-11 07:47:23.335111 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 13 days ago 1GB 2026-04-11 07:47:23.335130 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 13 days ago 1.23GB 2026-04-11 07:47:23.335140 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 13 days ago 1.39GB 2026-04-11 07:47:23.335149 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 
0b8714cecfd8 13 days ago 1.23GB 2026-04-11 07:47:23.335159 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 13 days ago 1.23GB 2026-04-11 07:47:23.335178 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 13 days ago 1.07GB 2026-04-11 07:47:23.335188 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 13 days ago 1.07GB 2026-04-11 07:47:23.335197 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 13 days ago 1.07GB 2026-04-11 07:47:23.335211 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 13 days ago 1.24GB 2026-04-11 07:47:23.335227 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 13 days ago 301MB 2026-04-11 07:47:23.335243 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-11 07:47:23.335257 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-11 07:47:23.335273 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-11 07:47:23.335288 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-11 07:47:23.335313 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-11 07:47:23.335327 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-11 07:47:23.335338 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-11 07:47:23.335351 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months 
ago 1.02GB 2026-04-11 07:47:23.335365 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-11 07:47:23.335377 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-11 07:47:23.335390 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-11 07:47:23.335403 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-11 07:47:23.335421 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-11 07:47:23.335435 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-11 07:47:23.335450 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-11 07:47:23.335462 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-11 07:47:23.335476 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-11 07:47:23.335490 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-11 07:47:23.335503 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-11 07:47:23.335516 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-11 07:47:23.335529 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-11 07:47:23.335590 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 
ad8bb4636454 4 months ago 279MB 2026-04-11 07:47:23.335607 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-11 07:47:23.335620 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-11 07:47:23.335634 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-11 07:47:23.335647 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-11 07:47:23.335660 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-11 07:47:23.335674 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-11 07:47:23.335695 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-11 07:47:23.335704 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-11 07:47:23.335719 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-11 07:47:23.335727 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-11 07:47:23.335735 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-11 07:47:23.335743 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-11 07:47:23.335750 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-11 07:47:23.335758 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 
months ago 974MB 2026-04-11 07:47:23.335766 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-11 07:47:23.335774 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-11 07:47:23.335781 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-11 07:47:23.335789 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-11 07:47:23.335796 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-11 07:47:23.335804 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-11 07:47:23.335812 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-11 07:47:23.335819 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-11 07:47:23.335827 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-11 07:47:23.335835 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-11 07:47:23.335843 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-11 07:47:23.335850 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-11 07:47:23.335858 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-11 07:47:23.335866 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 
months ago 1.05GB 2026-04-11 07:47:23.335873 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-11 07:47:23.335887 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-11 07:47:23.335895 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-11 07:47:23.335903 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-11 07:47:23.335911 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-11 07:47:23.335924 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-11 07:47:23.335931 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-11 07:47:23.335939 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-11 07:47:23.335947 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-11 07:47:23.335954 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-11 07:47:23.335962 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-11 07:47:23.335970 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-11 07:47:23.335977 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-11 07:47:23.335985 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months 
ago 1.4GB 2026-04-11 07:47:23.335993 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-11 07:47:23.336005 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-11 07:47:23.336012 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-11 07:47:23.336020 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-11 07:47:23.336028 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-11 07:47:23.482135 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-11 07:47:23.482236 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-11 07:47:23.545776 | orchestrator | 2026-04-11 07:47:23.545856 | orchestrator | ## Containers @ testbed-node-2 2026-04-11 07:47:23.545865 | orchestrator | 2026-04-11 07:47:23.545872 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-11 07:47:23.545877 | orchestrator | + echo 2026-04-11 07:47:23.545883 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-11 07:47:23.545889 | orchestrator | + echo 2026-04-11 07:47:23.545894 | orchestrator | + osism container testbed-node-2 ps 2026-04-11 07:47:25.102610 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-11 07:47:25.102711 | orchestrator | 97550445b74f registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 18 seconds ago Up 16 seconds (health: starting) magnum_conductor 2026-04-11 07:47:25.102729 | orchestrator | f606e4ed6373 registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 46 seconds ago Up 45 seconds (healthy) magnum_api 2026-04-11 07:47:25.102741 | orchestrator | e20a005dfdee 
registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 3 minutes ago Up 3 minutes grafana 2026-04-11 07:47:25.102752 | orchestrator | 49644186a258 registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter 2026-04-11 07:47:25.102787 | orchestrator | 24a9567140a8 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor 2026-04-11 07:47:25.102798 | orchestrator | 02787e97e700 registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_memcached_exporter 2026-04-11 07:47:25.102810 | orchestrator | aff13b045515 registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter 2026-04-11 07:47:25.102821 | orchestrator | d9da635062c1 registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-11 07:47:25.102831 | orchestrator | 9c30444a093c registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) manila_share 2026-04-11 07:47:25.102846 | orchestrator | c692ebbe3a8b registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler 2026-04-11 07:47:25.102884 | orchestrator | 7edda677e8db registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_data 2026-04-11 07:47:25.102902 | orchestrator | 3fabc9d5c457 registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 
minutes (healthy) manila_api 2026-04-11 07:47:25.102920 | orchestrator | 2dbead14f395 registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_worker 2026-04-11 07:47:25.102939 | orchestrator | 9299c5fddd7e registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_housekeeping 2026-04-11 07:47:25.102956 | orchestrator | 575715bfc1d0 registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_health_manager 2026-04-11 07:47:25.102975 | orchestrator | 29140530adcd registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes octavia_driver_agent 2026-04-11 07:47:25.102993 | orchestrator | 05a13bbf199a registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) octavia_api 2026-04-11 07:47:25.103101 | orchestrator | a87b6f6d3678 registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_notifier 2026-04-11 07:47:25.103120 | orchestrator | 2658a459a181 registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_listener 2026-04-11 07:47:25.103132 | orchestrator | 006b55ba98fe registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_evaluator 2026-04-11 07:47:25.103145 | orchestrator | f690986c987d registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_api 2026-04-11 07:47:25.103168 | orchestrator | f38b2d8f0b5a 
registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes ceilometer_central 2026-04-11 07:47:25.103181 | orchestrator | f951d695d514 registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) ceilometer_notification 2026-04-11 07:47:25.103193 | orchestrator | f2903c1e5b48 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-04-11 07:47:25.103206 | orchestrator | dfd680b32a52 registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns 2026-04-11 07:47:25.103218 | orchestrator | 252271eb882e registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-04-11 07:47:25.103230 | orchestrator | 8d1e9fddaf39 registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-04-11 07:47:25.103242 | orchestrator | 7836df25bac5 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-04-11 07:47:25.103255 | orchestrator | ad485e21d829 registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-04-11 07:47:25.103267 | orchestrator | c33788759eaf registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-04-11 07:47:25.103279 | orchestrator | ba86e6fe4c23 registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 
"dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-04-11 07:47:25.103292 | orchestrator | c9c48b013cd8 registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-04-11 07:47:25.103304 | orchestrator | ad40c9f9d355 registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_backup 2026-04-11 07:47:25.103316 | orchestrator | 9e63b273a585 registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_volume 2026-04-11 07:47:25.103328 | orchestrator | e09ef6c1b88a registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_scheduler 2026-04-11 07:47:25.103340 | orchestrator | 11a2355ff3ba registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-11 07:47:25.103360 | orchestrator | f2beaacdd5ac registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) glance_api 2026-04-11 07:47:25.103664 | orchestrator | 8761b88fb7b7 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) skyline_console 2026-04-11 07:47:25.103705 | orchestrator | f099f894a5a6 registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 43 minutes (healthy) skyline_apiserver 2026-04-11 07:47:25.103717 | orchestrator | 97af6fe783ae registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) horizon 2026-04-11 07:47:25.103727 | orchestrator | 14e719d25819 
registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 48 minutes (healthy) nova_novncproxy 2026-04-11 07:47:25.103738 | orchestrator | 7270294a29b0 registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" 59 minutes ago Up 48 minutes (healthy) nova_conductor 2026-04-11 07:47:25.103749 | orchestrator | 0c7990e09945 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata 2026-04-11 07:47:25.103760 | orchestrator | a61e28416532 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 47 minutes (healthy) nova_api 2026-04-11 07:47:25.103782 | orchestrator | 8cf095d40d8c registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 47 minutes (healthy) nova_scheduler 2026-04-11 07:47:25.103793 | orchestrator | 7643fd253929 registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server 2026-04-11 07:47:25.103804 | orchestrator | 7b08b7bc7c5a registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api 2026-04-11 07:47:25.103815 | orchestrator | b51d68cc597c registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone 2026-04-11 07:47:25.103835 | orchestrator | 72e848b440cc registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet 2026-04-11 07:47:25.103854 | orchestrator | 30be51ed5c80 registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) 
keystone_ssh 2026-04-11 07:47:25.103871 | orchestrator | 6a4ed697163b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-04-11 07:47:25.103889 | orchestrator | fa286467f73e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-2 2026-04-11 07:47:25.103908 | orchestrator | 5c0324173fbf registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-2 2026-04-11 07:47:25.103926 | orchestrator | 3d28fdda7179 registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_northd 2026-04-11 07:47:25.103957 | orchestrator | a03d6e572207 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db_relay_1 2026-04-11 07:47:25.103977 | orchestrator | 3ae1bef7e8a4 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db 2026-04-11 07:47:25.104008 | orchestrator | 99e11fd34351 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_nb_db 2026-04-11 07:47:25.104027 | orchestrator | 689b7cf57918 registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_controller 2026-04-11 07:47:25.104047 | orchestrator | 2f5edba2e5da registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_vswitchd 2026-04-11 07:47:25.104064 | orchestrator | 81c8ea1fac45 registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_db 2026-04-11 07:47:25.104084 | orchestrator | d3be1591c404 
registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) rabbitmq 2026-04-11 07:47:25.104102 | orchestrator | 9f1051bdde08 registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 3 hours ago Up 3 hours (healthy) mariadb 2026-04-11 07:47:25.104120 | orchestrator | a64a3164e0d4 registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis_sentinel 2026-04-11 07:47:25.104138 | orchestrator | 2b88e3e0f0a4 registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis 2026-04-11 07:47:25.104155 | orchestrator | d731c1f419cd registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) memcached 2026-04-11 07:47:25.104174 | orchestrator | 8cac076a91e4 registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch_dashboards 2026-04-11 07:47:25.104195 | orchestrator | e9d59a80b5bf registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch 2026-04-11 07:47:25.104214 | orchestrator | 51c3692d7867 registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours keepalived 2026-04-11 07:47:25.104242 | orchestrator | 7de29bdccf57 registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) proxysql 2026-04-11 07:47:25.104262 | orchestrator | fd931a261629 registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) haproxy 2026-04-11 07:47:25.104282 | orchestrator | 33a3d039e5e7 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 
hours ago Up 3 hours cron 2026-04-11 07:47:25.104318 | orchestrator | 7fe6f2eadc94 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox 2026-04-11 07:47:25.104339 | orchestrator | f9baf051795f registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd 2026-04-11 07:47:25.265079 | orchestrator | 2026-04-11 07:47:25.265163 | orchestrator | ## Images @ testbed-node-2 2026-04-11 07:47:25.265175 | orchestrator | 2026-04-11 07:47:25.265185 | orchestrator | + echo 2026-04-11 07:47:25.265194 | orchestrator | + echo '## Images @ testbed-node-2' 2026-04-11 07:47:25.265203 | orchestrator | + echo 2026-04-11 07:47:25.265212 | orchestrator | + osism container testbed-node-2 images 2026-04-11 07:47:26.887934 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-11 07:47:26.888034 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 13 days ago 288MB 2026-04-11 07:47:26.888048 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 13 days ago 1.54GB 2026-04-11 07:47:26.888060 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 13 days ago 1.57GB 2026-04-11 07:47:26.888071 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 13 days ago 590MB 2026-04-11 07:47:26.888082 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 13 days ago 277MB 2026-04-11 07:47:26.888093 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 13 days ago 1.04GB 2026-04-11 07:47:26.888103 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 13 days ago 350MB 2026-04-11 07:47:26.888114 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 
3.0.6.20260328 ccffdf9574f0 13 days ago 427MB 2026-04-11 07:47:26.888125 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 13 days ago 683MB 2026-04-11 07:47:26.888135 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 13 days ago 277MB 2026-04-11 07:47:26.888146 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 13 days ago 285MB 2026-04-11 07:47:26.888157 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 13 days ago 293MB 2026-04-11 07:47:26.888167 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 13 days ago 293MB 2026-04-11 07:47:26.888178 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 13 days ago 284MB 2026-04-11 07:47:26.888189 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 13 days ago 284MB 2026-04-11 07:47:26.888199 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 13 days ago 1.2GB 2026-04-11 07:47:26.888210 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 13 days ago 463MB 2026-04-11 07:47:26.888221 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 13 days ago 309MB 2026-04-11 07:47:26.888231 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 13 days ago 368MB 2026-04-11 07:47:26.888242 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 13 days ago 303MB 2026-04-11 07:47:26.888274 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 13 days ago 312MB 2026-04-11 
07:47:26.888287 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 13 days ago 317MB 2026-04-11 07:47:26.888298 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 13 days ago 301MB 2026-04-11 07:47:26.888309 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 13 days ago 301MB 2026-04-11 07:47:26.888319 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 13 days ago 301MB 2026-04-11 07:47:26.888330 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 13 days ago 301MB 2026-04-11 07:47:26.888341 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 13 days ago 1.09GB 2026-04-11 07:47:26.888352 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 13 days ago 1.06GB 2026-04-11 07:47:26.888362 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 13 days ago 1.05GB 2026-04-11 07:47:26.888390 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 13 days ago 997MB 2026-04-11 07:47:26.888402 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 13 days ago 996MB 2026-04-11 07:47:26.888413 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 13 days ago 1.07GB 2026-04-11 07:47:26.888424 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 13 days ago 1.07GB 2026-04-11 07:47:26.888435 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 13 days ago 1.05GB 2026-04-11 07:47:26.888445 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 13 days ago 1.05GB 2026-04-11 07:47:26.888456 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 13 days ago 1.05GB 2026-04-11 07:47:26.888468 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 13 days ago 996MB 2026-04-11 07:47:26.888481 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 13 days ago 995MB 2026-04-11 07:47:26.888494 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 13 days ago 995MB 2026-04-11 07:47:26.888506 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 13 days ago 995MB 2026-04-11 07:47:26.888518 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 13 days ago 994MB 2026-04-11 07:47:26.888547 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 13 days ago 1.12GB 2026-04-11 07:47:26.888633 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 13 days ago 1.79GB 2026-04-11 07:47:26.888647 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 13 days ago 1.43GB 2026-04-11 07:47:26.888660 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 13 days ago 1.43GB 2026-04-11 07:47:26.888695 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 13 days ago 1.44GB 2026-04-11 07:47:26.888708 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 13 days ago 1.24GB 2026-04-11 07:47:26.888721 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 
cf9981ab1a70 13 days ago 1.07GB 2026-04-11 07:47:26.888733 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 13 days ago 1.02GB 2026-04-11 07:47:26.888745 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 13 days ago 1GB 2026-04-11 07:47:26.888758 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 13 days ago 1GB 2026-04-11 07:47:26.888775 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 13 days ago 1GB 2026-04-11 07:47:26.888788 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 13 days ago 1.27GB 2026-04-11 07:47:26.888801 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 13 days ago 1.15GB 2026-04-11 07:47:26.888813 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 13 days ago 1.01GB 2026-04-11 07:47:26.888826 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 13 days ago 1GB 2026-04-11 07:47:26.888836 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 13 days ago 1GB 2026-04-11 07:47:26.888847 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 13 days ago 1.01GB 2026-04-11 07:47:26.888858 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 13 days ago 1GB 2026-04-11 07:47:26.888869 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 13 days ago 1GB 2026-04-11 07:47:26.888887 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 13 days ago 1.23GB 2026-04-11 07:47:26.888898 | orchestrator 
| registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 13 days ago 1.39GB 2026-04-11 07:47:26.888909 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 13 days ago 1.23GB 2026-04-11 07:47:26.888920 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 13 days ago 1.23GB 2026-04-11 07:47:26.888930 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 13 days ago 1.07GB 2026-04-11 07:47:26.888941 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 13 days ago 1.07GB 2026-04-11 07:47:26.888952 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 13 days ago 1.07GB 2026-04-11 07:47:26.888962 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 13 days ago 1.24GB 2026-04-11 07:47:26.888973 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 13 days ago 301MB 2026-04-11 07:47:26.888984 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-11 07:47:26.889002 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-11 07:47:26.889013 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-11 07:47:26.889023 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-11 07:47:26.889034 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-11 07:47:26.889045 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-11 07:47:26.889056 | 
orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-11 07:47:26.889066 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-11 07:47:26.889077 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-11 07:47:26.889088 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-11 07:47:26.889099 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-11 07:47:26.889109 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-11 07:47:26.889120 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-11 07:47:26.889135 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-11 07:47:26.889146 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-11 07:47:26.889157 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-11 07:47:26.889168 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-11 07:47:26.889178 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-11 07:47:26.889189 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-11 07:47:26.889200 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-11 07:47:26.889210 | orchestrator | 
registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-11 07:47:26.889225 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-04-11 07:47:26.889236 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-11 07:47:26.889247 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-11 07:47:26.889258 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-11 07:47:26.889268 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-11 07:47:26.889286 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-11 07:47:26.889297 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-11 07:47:26.889307 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-11 07:47:26.889318 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-11 07:47:26.889329 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-11 07:47:26.889340 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-11 07:47:26.889350 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-11 07:47:26.889361 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-11 07:47:26.889372 | orchestrator | 
registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-11 07:47:26.889382 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-11 07:47:26.889393 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-11 07:47:26.889404 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-11 07:47:26.889414 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-11 07:47:26.889425 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-11 07:47:26.889437 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-11 07:47:26.889447 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-11 07:47:26.889458 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-11 07:47:26.889469 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-11 07:47:26.889479 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-11 07:47:26.889490 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-11 07:47:26.889501 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-11 07:47:26.889511 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-11 07:47:26.889522 | orchestrator | 
registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-11 07:47:26.889533 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-11 07:47:26.889543 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-11 07:47:26.889588 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-11 07:47:26.889599 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-11 07:47:26.889610 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-11 07:47:26.889621 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-11 07:47:26.889631 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-11 07:47:26.889642 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-11 07:47:26.889653 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-11 07:47:26.889664 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-11 07:47:26.889674 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-11 07:47:26.889685 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-11 07:47:26.889696 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-11 07:47:26.889707 | orchestrator 
| registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-11 07:47:26.889717 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-11 07:47:26.889728 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-11 07:47:26.889739 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-11 07:47:26.889749 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-11 07:47:26.889760 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-11 07:47:26.889771 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-11 07:47:27.048394 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-11 07:47:27.057975 | orchestrator | + set -e 2026-04-11 07:47:27.058118 | orchestrator | + source /opt/manager-vars.sh 2026-04-11 07:47:27.058134 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-11 07:47:27.058146 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-11 07:47:27.058157 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-11 07:47:27.058169 | orchestrator | ++ CEPH_VERSION=reef 2026-04-11 07:47:27.058180 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-11 07:47:27.058193 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-11 07:47:27.058204 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-11 07:47:27.058215 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-11 07:47:27.058226 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-11 07:47:27.058237 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-11 07:47:27.058248 | orchestrator | ++ export ARA=false 2026-04-11 07:47:27.058315 | orchestrator | ++ ARA=false 
2026-04-11 07:47:27.058328 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-11 07:47:27.058339 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-11 07:47:27.058350 | orchestrator | ++ export TEMPEST=false 2026-04-11 07:47:27.058360 | orchestrator | ++ TEMPEST=false 2026-04-11 07:47:27.058372 | orchestrator | ++ export IS_ZUUL=true 2026-04-11 07:47:27.058403 | orchestrator | ++ IS_ZUUL=true 2026-04-11 07:47:27.058415 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48 2026-04-11 07:47:27.058426 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48 2026-04-11 07:47:27.058437 | orchestrator | ++ export EXTERNAL_API=false 2026-04-11 07:47:27.058453 | orchestrator | ++ EXTERNAL_API=false 2026-04-11 07:47:27.058464 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-11 07:47:27.058475 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-11 07:47:27.058486 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-11 07:47:27.058497 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-11 07:47:27.058507 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-11 07:47:27.058518 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-11 07:47:27.058529 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-11 07:47:27.058540 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-11 07:47:27.058577 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-11 07:47:27.058591 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-11 07:47:27.068868 | orchestrator | + set -e 2026-04-11 07:47:27.068919 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-11 07:47:27.068931 | orchestrator | ++ export INTERACTIVE=false 2026-04-11 07:47:27.068943 | orchestrator | ++ INTERACTIVE=false 2026-04-11 07:47:27.068954 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-11 07:47:27.068964 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-11 07:47:27.068975 | orchestrator | + source 
/opt/configuration/scripts/manager-version.sh 2026-04-11 07:47:27.069926 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-11 07:47:27.076835 | orchestrator | 2026-04-11 07:47:27.076883 | orchestrator | # Ceph status 2026-04-11 07:47:27.076896 | orchestrator | 2026-04-11 07:47:27.076907 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-11 07:47:27.076919 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-11 07:47:27.076930 | orchestrator | + echo 2026-04-11 07:47:27.076941 | orchestrator | + echo '# Ceph status' 2026-04-11 07:47:27.076952 | orchestrator | + echo 2026-04-11 07:47:27.076964 | orchestrator | + ceph -s 2026-04-11 07:47:27.756767 | orchestrator | cluster: 2026-04-11 07:47:27.756897 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-11 07:47:27.756923 | orchestrator | health: HEALTH_OK 2026-04-11 07:47:27.756941 | orchestrator | 2026-04-11 07:47:27.756958 | orchestrator | services: 2026-04-11 07:47:27.756977 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 2h) 2026-04-11 07:47:27.756999 | orchestrator | mgr: testbed-node-1(active, since 2h), standbys: testbed-node-2, testbed-node-0 2026-04-11 07:47:27.757018 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-11 07:47:27.757036 | orchestrator | osd: 6 osds: 6 up (since 109m), 6 in (since 4h) 2026-04-11 07:47:27.757055 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-11 07:47:27.757074 | orchestrator | 2026-04-11 07:47:27.757093 | orchestrator | data: 2026-04-11 07:47:27.757112 | orchestrator | volumes: 1/1 healthy 2026-04-11 07:47:27.757130 | orchestrator | pools: 14 pools, 401 pgs 2026-04-11 07:47:27.757150 | orchestrator | objects: 821 objects, 2.8 GiB 2026-04-11 07:47:27.757169 | orchestrator | usage: 7.9 GiB used, 112 GiB / 120 GiB avail 2026-04-11 07:47:27.757188 | orchestrator | pgs: 401 active+clean 2026-04-11 07:47:27.757200 | 
orchestrator | 2026-04-11 07:47:27.757211 | orchestrator | io: 2026-04-11 07:47:27.757222 | orchestrator | client: 1.2 KiB/s rd, 1 op/s rd, 0 op/s wr 2026-04-11 07:47:27.757233 | orchestrator | 2026-04-11 07:47:27.801637 | orchestrator | 2026-04-11 07:47:27.801729 | orchestrator | # Ceph versions 2026-04-11 07:47:27.801742 | orchestrator | 2026-04-11 07:47:27.801753 | orchestrator | + echo 2026-04-11 07:47:27.801763 | orchestrator | + echo '# Ceph versions' 2026-04-11 07:47:27.801774 | orchestrator | + echo 2026-04-11 07:47:27.801784 | orchestrator | + ceph versions 2026-04-11 07:47:28.386702 | orchestrator | { 2026-04-11 07:47:28.386828 | orchestrator | "mon": { 2026-04-11 07:47:28.386853 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-11 07:47:28.386876 | orchestrator | }, 2026-04-11 07:47:28.386897 | orchestrator | "mgr": { 2026-04-11 07:47:28.386919 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-11 07:47:28.386940 | orchestrator | }, 2026-04-11 07:47:28.386961 | orchestrator | "osd": { 2026-04-11 07:47:28.386983 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-04-11 07:47:28.387004 | orchestrator | }, 2026-04-11 07:47:28.387026 | orchestrator | "mds": { 2026-04-11 07:47:28.387047 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-11 07:47:28.387097 | orchestrator | }, 2026-04-11 07:47:28.387117 | orchestrator | "rgw": { 2026-04-11 07:47:28.387137 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-11 07:47:28.387155 | orchestrator | }, 2026-04-11 07:47:28.387175 | orchestrator | "overall": { 2026-04-11 07:47:28.387194 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-04-11 07:47:28.387214 | orchestrator | } 2026-04-11 
07:47:28.387235 | orchestrator | } 2026-04-11 07:47:28.446255 | orchestrator | 2026-04-11 07:47:28.446350 | orchestrator | # Ceph OSD tree 2026-04-11 07:47:28.446363 | orchestrator | 2026-04-11 07:47:28.446375 | orchestrator | + echo 2026-04-11 07:47:28.446387 | orchestrator | + echo '# Ceph OSD tree' 2026-04-11 07:47:28.446398 | orchestrator | + echo 2026-04-11 07:47:28.446409 | orchestrator | + ceph osd df tree 2026-04-11 07:47:28.982308 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-11 07:47:28.982414 | orchestrator | -1 0.11691 - 120 GiB 7.9 GiB 7.6 GiB 45 KiB 317 MiB 112 GiB 6.62 1.00 - root default 2026-04-11 07:47:28.982429 | orchestrator | -5 0.03897 - 40 GiB 2.6 GiB 2.5 GiB 15 KiB 96 MiB 37 GiB 6.60 1.00 - host testbed-node-3 2026-04-11 07:47:28.982441 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 7 KiB 50 MiB 19 GiB 6.77 1.02 190 up osd.0 2026-04-11 07:47:28.982452 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 8 KiB 46 MiB 19 GiB 6.43 0.97 202 up osd.4 2026-04-11 07:47:28.982463 | orchestrator | -3 0.03897 - 40 GiB 2.6 GiB 2.5 GiB 15 KiB 109 MiB 37 GiB 6.63 1.00 - host testbed-node-4 2026-04-11 07:47:28.982474 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 8 KiB 54 MiB 19 GiB 6.76 1.02 195 up osd.2 2026-04-11 07:47:28.982485 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 7 KiB 54 MiB 19 GiB 6.49 0.98 195 up osd.5 2026-04-11 07:47:28.982496 | orchestrator | -7 0.03897 - 40 GiB 2.7 GiB 2.5 GiB 15 KiB 113 MiB 37 GiB 6.64 1.00 - host testbed-node-5 2026-04-11 07:47:28.982507 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 9 KiB 54 MiB 18 GiB 7.54 1.14 185 up osd.1 2026-04-11 07:47:28.982518 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 6 KiB 58 MiB 19 GiB 5.74 0.87 203 up osd.3 2026-04-11 07:47:28.982529 | orchestrator | TOTAL 120 GiB 7.9 GiB 7.6 GiB 48 KiB 317 MiB 112 GiB 6.62 2026-04-11 
07:47:28.982540 | orchestrator | MIN/MAX VAR: 0.87/1.14 STDDEV: 0.54 2026-04-11 07:47:29.030008 | orchestrator | 2026-04-11 07:47:29.030154 | orchestrator | # Ceph monitor status 2026-04-11 07:47:29.030170 | orchestrator | 2026-04-11 07:47:29.030182 | orchestrator | + echo 2026-04-11 07:47:29.030193 | orchestrator | + echo '# Ceph monitor status' 2026-04-11 07:47:29.030204 | orchestrator | + echo 2026-04-11 07:47:29.030215 | orchestrator | + ceph mon stat 2026-04-11 07:47:29.617535 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 34, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-11 07:47:29.661340 | orchestrator | 2026-04-11 07:47:29.661432 | orchestrator | # Ceph quorum status 2026-04-11 07:47:29.661447 | orchestrator | 2026-04-11 07:47:29.661459 | orchestrator | + echo 2026-04-11 07:47:29.661470 | orchestrator | + echo '# Ceph quorum status' 2026-04-11 07:47:29.661481 | orchestrator | + echo 2026-04-11 07:47:29.662174 | orchestrator | + ceph quorum_status 2026-04-11 07:47:29.662199 | orchestrator | + jq 2026-04-11 07:47:30.274476 | orchestrator | { 2026-04-11 07:47:30.274623 | orchestrator | "election_epoch": 34, 2026-04-11 07:47:30.274641 | orchestrator | "quorum": [ 2026-04-11 07:47:30.274652 | orchestrator | 0, 2026-04-11 07:47:30.274664 | orchestrator | 1, 2026-04-11 07:47:30.274675 | orchestrator | 2 2026-04-11 07:47:30.274685 | orchestrator | ], 2026-04-11 07:47:30.274726 | orchestrator | "quorum_names": [ 2026-04-11 07:47:30.274737 | orchestrator | "testbed-node-0", 2026-04-11 07:47:30.274748 | orchestrator | "testbed-node-1", 2026-04-11 07:47:30.274759 | orchestrator | "testbed-node-2" 2026-04-11 07:47:30.274769 | orchestrator | ], 2026-04-11 07:47:30.274780 | orchestrator | 
"quorum_leader_name": "testbed-node-0", 2026-04-11 07:47:30.274792 | orchestrator | "quorum_age": 8324, 2026-04-11 07:47:30.274803 | orchestrator | "features": { 2026-04-11 07:47:30.274813 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-11 07:47:30.274824 | orchestrator | "quorum_mon": [ 2026-04-11 07:47:30.274835 | orchestrator | "kraken", 2026-04-11 07:47:30.274845 | orchestrator | "luminous", 2026-04-11 07:47:30.274856 | orchestrator | "mimic", 2026-04-11 07:47:30.274866 | orchestrator | "osdmap-prune", 2026-04-11 07:47:30.274877 | orchestrator | "nautilus", 2026-04-11 07:47:30.274888 | orchestrator | "octopus", 2026-04-11 07:47:30.274898 | orchestrator | "pacific", 2026-04-11 07:47:30.274909 | orchestrator | "elector-pinging", 2026-04-11 07:47:30.274919 | orchestrator | "quincy", 2026-04-11 07:47:30.274930 | orchestrator | "reef" 2026-04-11 07:47:30.274940 | orchestrator | ] 2026-04-11 07:47:30.274951 | orchestrator | }, 2026-04-11 07:47:30.274962 | orchestrator | "monmap": { 2026-04-11 07:47:30.274972 | orchestrator | "epoch": 1, 2026-04-11 07:47:30.274983 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-11 07:47:30.274995 | orchestrator | "modified": "2026-04-11T02:56:22.811149Z", 2026-04-11 07:47:30.275005 | orchestrator | "created": "2026-04-11T02:56:22.811149Z", 2026-04-11 07:47:30.275016 | orchestrator | "min_mon_release": 18, 2026-04-11 07:47:30.275027 | orchestrator | "min_mon_release_name": "reef", 2026-04-11 07:47:30.275037 | orchestrator | "election_strategy": 1, 2026-04-11 07:47:30.275050 | orchestrator | "disallowed_leaders: ": "", 2026-04-11 07:47:30.275063 | orchestrator | "stretch_mode": false, 2026-04-11 07:47:30.275075 | orchestrator | "tiebreaker_mon": "", 2026-04-11 07:47:30.275087 | orchestrator | "removed_ranks: ": "", 2026-04-11 07:47:30.275099 | orchestrator | "features": { 2026-04-11 07:47:30.275112 | orchestrator | "persistent": [ 2026-04-11 07:47:30.275124 | orchestrator | "kraken", 2026-04-11 
07:47:30.275136 | orchestrator | "luminous", 2026-04-11 07:47:30.275148 | orchestrator | "mimic", 2026-04-11 07:47:30.275159 | orchestrator | "osdmap-prune", 2026-04-11 07:47:30.275171 | orchestrator | "nautilus", 2026-04-11 07:47:30.275183 | orchestrator | "octopus", 2026-04-11 07:47:30.275195 | orchestrator | "pacific", 2026-04-11 07:47:30.275208 | orchestrator | "elector-pinging", 2026-04-11 07:47:30.275220 | orchestrator | "quincy", 2026-04-11 07:47:30.275233 | orchestrator | "reef" 2026-04-11 07:47:30.275246 | orchestrator | ], 2026-04-11 07:47:30.275258 | orchestrator | "optional": [] 2026-04-11 07:47:30.275270 | orchestrator | }, 2026-04-11 07:47:30.275282 | orchestrator | "mons": [ 2026-04-11 07:47:30.275294 | orchestrator | { 2026-04-11 07:47:30.275306 | orchestrator | "rank": 0, 2026-04-11 07:47:30.275318 | orchestrator | "name": "testbed-node-0", 2026-04-11 07:47:30.275332 | orchestrator | "public_addrs": { 2026-04-11 07:47:30.275343 | orchestrator | "addrvec": [ 2026-04-11 07:47:30.275355 | orchestrator | { 2026-04-11 07:47:30.275367 | orchestrator | "type": "v2", 2026-04-11 07:47:30.275381 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-11 07:47:30.275393 | orchestrator | "nonce": 0 2026-04-11 07:47:30.275405 | orchestrator | }, 2026-04-11 07:47:30.275415 | orchestrator | { 2026-04-11 07:47:30.275426 | orchestrator | "type": "v1", 2026-04-11 07:47:30.275436 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-11 07:47:30.275447 | orchestrator | "nonce": 0 2026-04-11 07:47:30.275457 | orchestrator | } 2026-04-11 07:47:30.275468 | orchestrator | ] 2026-04-11 07:47:30.275479 | orchestrator | }, 2026-04-11 07:47:30.275489 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-11 07:47:30.275500 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-11 07:47:30.275511 | orchestrator | "priority": 0, 2026-04-11 07:47:30.275521 | orchestrator | "weight": 0, 2026-04-11 07:47:30.275532 | orchestrator | "crush_location": "{}" 2026-04-11 
07:47:30.275542 | orchestrator | }, 2026-04-11 07:47:30.275580 | orchestrator | { 2026-04-11 07:47:30.275591 | orchestrator | "rank": 1, 2026-04-11 07:47:30.275602 | orchestrator | "name": "testbed-node-1", 2026-04-11 07:47:30.275613 | orchestrator | "public_addrs": { 2026-04-11 07:47:30.275623 | orchestrator | "addrvec": [ 2026-04-11 07:47:30.275634 | orchestrator | { 2026-04-11 07:47:30.275644 | orchestrator | "type": "v2", 2026-04-11 07:47:30.275663 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-11 07:47:30.275678 | orchestrator | "nonce": 0 2026-04-11 07:47:30.275689 | orchestrator | }, 2026-04-11 07:47:30.275700 | orchestrator | { 2026-04-11 07:47:30.275710 | orchestrator | "type": "v1", 2026-04-11 07:47:30.275721 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-11 07:47:30.275732 | orchestrator | "nonce": 0 2026-04-11 07:47:30.275742 | orchestrator | } 2026-04-11 07:47:30.275753 | orchestrator | ] 2026-04-11 07:47:30.275763 | orchestrator | }, 2026-04-11 07:47:30.275774 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-11 07:47:30.275785 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-11 07:47:30.275795 | orchestrator | "priority": 0, 2026-04-11 07:47:30.275806 | orchestrator | "weight": 0, 2026-04-11 07:47:30.275816 | orchestrator | "crush_location": "{}" 2026-04-11 07:47:30.275827 | orchestrator | }, 2026-04-11 07:47:30.275837 | orchestrator | { 2026-04-11 07:47:30.275848 | orchestrator | "rank": 2, 2026-04-11 07:47:30.275858 | orchestrator | "name": "testbed-node-2", 2026-04-11 07:47:30.275869 | orchestrator | "public_addrs": { 2026-04-11 07:47:30.275880 | orchestrator | "addrvec": [ 2026-04-11 07:47:30.275890 | orchestrator | { 2026-04-11 07:47:30.275900 | orchestrator | "type": "v2", 2026-04-11 07:47:30.275911 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-11 07:47:30.275921 | orchestrator | "nonce": 0 2026-04-11 07:47:30.275932 | orchestrator | }, 2026-04-11 07:47:30.275942 | orchestrator | { 2026-04-11 
07:47:30.275953 | orchestrator | "type": "v1", 2026-04-11 07:47:30.275963 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-11 07:47:30.275974 | orchestrator | "nonce": 0 2026-04-11 07:47:30.275984 | orchestrator | } 2026-04-11 07:47:30.275995 | orchestrator | ] 2026-04-11 07:47:30.276005 | orchestrator | }, 2026-04-11 07:47:30.276016 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-11 07:47:30.276027 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-11 07:47:30.276037 | orchestrator | "priority": 0, 2026-04-11 07:47:30.276048 | orchestrator | "weight": 0, 2026-04-11 07:47:30.276058 | orchestrator | "crush_location": "{}" 2026-04-11 07:47:30.276068 | orchestrator | } 2026-04-11 07:47:30.276079 | orchestrator | ] 2026-04-11 07:47:30.276090 | orchestrator | } 2026-04-11 07:47:30.276100 | orchestrator | } 2026-04-11 07:47:30.276111 | orchestrator | 2026-04-11 07:47:30.276122 | orchestrator | # Ceph free space status 2026-04-11 07:47:30.276133 | orchestrator | 2026-04-11 07:47:30.276144 | orchestrator | + echo 2026-04-11 07:47:30.276154 | orchestrator | + echo '# Ceph free space status' 2026-04-11 07:47:30.276165 | orchestrator | + echo 2026-04-11 07:47:30.276176 | orchestrator | + ceph df 2026-04-11 07:47:30.866175 | orchestrator | --- RAW STORAGE --- 2026-04-11 07:47:30.866270 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-11 07:47:30.866296 | orchestrator | hdd 120 GiB 112 GiB 7.9 GiB 7.9 GiB 6.62 2026-04-11 07:47:30.866307 | orchestrator | TOTAL 120 GiB 112 GiB 7.9 GiB 7.9 GiB 6.62 2026-04-11 07:47:30.866316 | orchestrator | 2026-04-11 07:47:30.866326 | orchestrator | --- POOLS --- 2026-04-11 07:47:30.866336 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-11 07:47:30.866347 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-04-11 07:47:30.866356 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-11 07:47:30.866365 | orchestrator | cephfs_metadata 3 16 12 KiB 22 126 KiB 0 35 GiB 
2026-04-11 07:47:30.866374 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-11 07:47:30.866382 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-11 07:47:30.866391 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-11 07:47:30.866399 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-11 07:47:30.866419 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-11 07:47:30.866428 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2026-04-11 07:47:30.866437 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-11 07:47:30.866445 | orchestrator | volumes 11 32 325 MiB 267 974 MiB 0.90 35 GiB 2026-04-11 07:47:30.866472 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.97 35 GiB 2026-04-11 07:47:30.866481 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-11 07:47:30.866490 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-11 07:47:30.911154 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-11 07:47:30.956989 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-11 07:47:30.957100 | orchestrator | + osism apply facts 2026-04-11 07:47:32.241696 | orchestrator | 2026-04-11 07:47:32 | INFO  | Prepare task for execution of facts. 2026-04-11 07:47:32.316746 | orchestrator | 2026-04-11 07:47:32 | INFO  | Task a6c1847b-8018-4398-972e-eb8db50e7eb0 (facts) was prepared for execution. 2026-04-11 07:47:32.316815 | orchestrator | 2026-04-11 07:47:32 | INFO  | It takes a moment until task a6c1847b-8018-4398-972e-eb8db50e7eb0 (facts) has been started and output is visible here. 
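At the end of the check script, `semver 10.0.0 5.0.0` returns `1` (10.0.0 sorts after 5.0.0), so the `[[ 1 -eq -1 ]]` guard is false and the pre-5.0.0 branch is skipped. A hedged sketch of such a three-way version comparison using `sort -V` (the actual `semver` helper on the node may be implemented differently):

```shell
# Three-way version comparison in the style of the `semver` call above:
# prints -1, 0, or 1 depending on how $1 compares to $2 as a version.
# Sketch only; the real helper's behavior is assumed, not verified.
vercmp() {
  if [ "$1" = "$2" ]; then echo 0; return; fi
  lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
  if [ "$lower" = "$1" ]; then echo "-1"; else echo "1"; fi
}
vercmp 10.0.0 5.0.0
```

Here `vercmp 10.0.0 5.0.0` prints `1`, matching the log, where the `-eq -1` branch (taken only when the configured version is older than 5.0.0) does not fire and the run proceeds to `osism apply facts`.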
2026-04-11 07:47:55.147100 | orchestrator | 2026-04-11 07:47:55.147225 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-11 07:47:55.147246 | orchestrator | 2026-04-11 07:47:55.147259 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-11 07:47:55.147273 | orchestrator | Saturday 11 April 2026 07:47:38 +0000 (0:00:02.072) 0:00:02.072 ******** 2026-04-11 07:47:55.147286 | orchestrator | ok: [testbed-manager] 2026-04-11 07:47:55.147299 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:47:55.147312 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:47:55.147324 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:47:55.147335 | orchestrator | ok: [testbed-node-3] 2026-04-11 07:47:55.147347 | orchestrator | ok: [testbed-node-4] 2026-04-11 07:47:55.147360 | orchestrator | ok: [testbed-node-5] 2026-04-11 07:47:55.147373 | orchestrator | 2026-04-11 07:47:55.147385 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-11 07:47:55.147399 | orchestrator | Saturday 11 April 2026 07:47:41 +0000 (0:00:03.124) 0:00:05.196 ******** 2026-04-11 07:47:55.147413 | orchestrator | skipping: [testbed-manager] 2026-04-11 07:47:55.147428 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:47:55.147442 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:47:55.147509 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:47:55.147524 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:47:55.147591 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:47:55.147606 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:47:55.147620 | orchestrator | 2026-04-11 07:47:55.147635 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-11 07:47:55.147663 | orchestrator | 2026-04-11 07:47:55.147678 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-11 07:47:55.147694 | orchestrator | Saturday 11 April 2026 07:47:44 +0000 (0:00:02.845) 0:00:08.042 ******** 2026-04-11 07:47:55.147710 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:47:55.147724 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:47:55.147740 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:47:55.147756 | orchestrator | ok: [testbed-manager] 2026-04-11 07:47:55.147773 | orchestrator | ok: [testbed-node-3] 2026-04-11 07:47:55.147788 | orchestrator | ok: [testbed-node-5] 2026-04-11 07:47:55.147803 | orchestrator | ok: [testbed-node-4] 2026-04-11 07:47:55.147818 | orchestrator | 2026-04-11 07:47:55.147832 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-11 07:47:55.147847 | orchestrator | 2026-04-11 07:47:55.147861 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-11 07:47:55.147875 | orchestrator | Saturday 11 April 2026 07:47:51 +0000 (0:00:07.882) 0:00:15.925 ******** 2026-04-11 07:47:55.147890 | orchestrator | skipping: [testbed-manager] 2026-04-11 07:47:55.147905 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:47:55.147920 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:47:55.147935 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:47:55.147949 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:47:55.147995 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:47:55.148011 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:47:55.148026 | orchestrator | 2026-04-11 07:47:55.148040 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:47:55.148055 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 07:47:55.148069 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-11 07:47:55.148084 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 07:47:55.148098 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 07:47:55.148113 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 07:47:55.148127 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 07:47:55.148140 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 07:47:55.148152 | orchestrator | 2026-04-11 07:47:55.148164 | orchestrator | 2026-04-11 07:47:55.148177 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:47:55.148190 | orchestrator | Saturday 11 April 2026 07:47:54 +0000 (0:00:02.788) 0:00:18.713 ******** 2026-04-11 07:47:55.148202 | orchestrator | =============================================================================== 2026-04-11 07:47:55.148214 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.88s 2026-04-11 07:47:55.148226 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.12s 2026-04-11 07:47:55.148239 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.85s 2026-04-11 07:47:55.148251 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.79s 2026-04-11 07:47:55.385407 | orchestrator | + osism validate ceph-mons 2026-04-11 07:49:05.170840 | orchestrator | 2026-04-11 07:49:05.170959 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-11 07:49:05.170978 | orchestrator | 2026-04-11 07:49:05.170990 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-04-11 07:49:05.171001 | orchestrator | Saturday 11 April 2026 07:48:12 +0000 (0:00:01.789) 0:00:01.789 ******** 2026-04-11 07:49:05.171013 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:49:05.171025 | orchestrator | 2026-04-11 07:49:05.171036 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-11 07:49:05.171047 | orchestrator | Saturday 11 April 2026 07:48:14 +0000 (0:00:02.804) 0:00:04.594 ******** 2026-04-11 07:49:05.171058 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:49:05.171069 | orchestrator | 2026-04-11 07:49:05.171080 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-11 07:49:05.171090 | orchestrator | Saturday 11 April 2026 07:48:16 +0000 (0:00:01.681) 0:00:06.275 ******** 2026-04-11 07:49:05.171101 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.171113 | orchestrator | 2026-04-11 07:49:05.171124 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-11 07:49:05.171135 | orchestrator | Saturday 11 April 2026 07:48:17 +0000 (0:00:01.113) 0:00:07.389 ******** 2026-04-11 07:49:05.171146 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.171157 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:49:05.171168 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:49:05.171178 | orchestrator | 2026-04-11 07:49:05.171189 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-11 07:49:05.171225 | orchestrator | Saturday 11 April 2026 07:48:19 +0000 (0:00:01.784) 0:00:09.173 ******** 2026-04-11 07:49:05.171237 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:49:05.171249 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.171259 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:49:05.171270 | orchestrator | 
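The validate play above runs each check as a skip/pass task pair ("Set test result to failed if …" is skipped while "… passed if …" records ok) and then aggregates the per-host results into one verdict. A rough shell sketch of that record-then-aggregate pattern (the check names and commands here are made up for illustration, not taken from the playbook):

```shell
# Record each check as name=passed/failed, then fail the run iff any failed.
# Check names and the `true` commands are illustrative stand-ins.
results=""
check() {
  name="$1"; shift
  if "$@" >/dev/null 2>&1; then
    results="$results $name=passed"
  else
    results="$results $name=failed"
  fi
}
check container-exists true    # stand-in for the container existence test
check ceph-mon-running true    # stand-in for the ceph-mon process test
case "$results" in
  *=failed*) echo "validation failed" ;;
  *)         echo "validation passed" ;;
esac
```

This mirrors why most "failed" tasks in the log show as `skipping:`: the failure-recording task only runs when its condition trips, and the final aggregation steps turn any recorded failure into a failed validation.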
2026-04-11 07:49:05.171281 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-11 07:49:05.171291 | orchestrator | Saturday 11 April 2026 07:48:21 +0000 (0:00:02.518) 0:00:11.692 ******** 2026-04-11 07:49:05.171302 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:49:05.171313 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:49:05.171324 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:49:05.171334 | orchestrator | 2026-04-11 07:49:05.171345 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-11 07:49:05.171356 | orchestrator | Saturday 11 April 2026 07:48:23 +0000 (0:00:01.356) 0:00:13.049 ******** 2026-04-11 07:49:05.171367 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.171380 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:49:05.171393 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:49:05.171406 | orchestrator | 2026-04-11 07:49:05.171418 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-11 07:49:05.171446 | orchestrator | Saturday 11 April 2026 07:48:24 +0000 (0:00:01.350) 0:00:14.399 ******** 2026-04-11 07:49:05.171460 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.171502 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:49:05.171515 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:49:05.171527 | orchestrator | 2026-04-11 07:49:05.171539 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-11 07:49:05.171550 | orchestrator | Saturday 11 April 2026 07:48:26 +0000 (0:00:01.385) 0:00:15.785 ******** 2026-04-11 07:49:05.171561 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:49:05.171572 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:49:05.171582 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:49:05.171593 | orchestrator | 2026-04-11 07:49:05.171604 | 
orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-11 07:49:05.171615 | orchestrator | Saturday 11 April 2026 07:48:27 +0000 (0:00:01.362) 0:00:17.147 ******** 2026-04-11 07:49:05.171632 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.171650 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:49:05.171669 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:49:05.171687 | orchestrator | 2026-04-11 07:49:05.171706 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-11 07:49:05.171726 | orchestrator | Saturday 11 April 2026 07:48:28 +0000 (0:00:01.378) 0:00:18.526 ******** 2026-04-11 07:49:05.171745 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:49:05.171763 | orchestrator | 2026-04-11 07:49:05.171774 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-11 07:49:05.171785 | orchestrator | Saturday 11 April 2026 07:48:30 +0000 (0:00:01.255) 0:00:19.781 ******** 2026-04-11 07:49:05.171796 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:49:05.171806 | orchestrator | 2026-04-11 07:49:05.171817 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-11 07:49:05.171828 | orchestrator | Saturday 11 April 2026 07:48:31 +0000 (0:00:01.290) 0:00:21.072 ******** 2026-04-11 07:49:05.171838 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:49:05.171849 | orchestrator | 2026-04-11 07:49:05.171859 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 07:49:05.171870 | orchestrator | Saturday 11 April 2026 07:48:32 +0000 (0:00:01.291) 0:00:22.364 ******** 2026-04-11 07:49:05.171881 | orchestrator | 2026-04-11 07:49:05.171891 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 07:49:05.171902 | orchestrator | Saturday 11 April 
2026 07:48:33 +0000 (0:00:00.485) 0:00:22.850 ******** 2026-04-11 07:49:05.171912 | orchestrator | 2026-04-11 07:49:05.171923 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 07:49:05.171950 | orchestrator | Saturday 11 April 2026 07:48:33 +0000 (0:00:00.588) 0:00:23.439 ******** 2026-04-11 07:49:05.171961 | orchestrator | 2026-04-11 07:49:05.171971 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-11 07:49:05.171982 | orchestrator | Saturday 11 April 2026 07:48:34 +0000 (0:00:00.797) 0:00:24.236 ******** 2026-04-11 07:49:05.171993 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:49:05.172003 | orchestrator | 2026-04-11 07:49:05.172014 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-11 07:49:05.172025 | orchestrator | Saturday 11 April 2026 07:48:35 +0000 (0:00:01.288) 0:00:25.525 ******** 2026-04-11 07:49:05.172035 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:49:05.172046 | orchestrator | 2026-04-11 07:49:05.172074 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-11 07:49:05.172085 | orchestrator | Saturday 11 April 2026 07:48:37 +0000 (0:00:01.299) 0:00:26.824 ******** 2026-04-11 07:49:05.172096 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.172106 | orchestrator | 2026-04-11 07:49:05.172117 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-11 07:49:05.172128 | orchestrator | Saturday 11 April 2026 07:48:38 +0000 (0:00:01.143) 0:00:27.968 ******** 2026-04-11 07:49:05.172138 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:49:05.172149 | orchestrator | 2026-04-11 07:49:05.172160 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-11 07:49:05.172170 | orchestrator | Saturday 11 April 2026 
07:48:40 +0000 (0:00:02.607) 0:00:30.576 ******** 2026-04-11 07:49:05.172181 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.172191 | orchestrator | 2026-04-11 07:49:05.172202 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-11 07:49:05.172212 | orchestrator | Saturday 11 April 2026 07:48:42 +0000 (0:00:01.375) 0:00:31.951 ******** 2026-04-11 07:49:05.172223 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:49:05.172234 | orchestrator | 2026-04-11 07:49:05.172244 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-11 07:49:05.172255 | orchestrator | Saturday 11 April 2026 07:48:43 +0000 (0:00:01.200) 0:00:33.152 ******** 2026-04-11 07:49:05.172266 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.172276 | orchestrator | 2026-04-11 07:49:05.172287 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-11 07:49:05.172298 | orchestrator | Saturday 11 April 2026 07:48:44 +0000 (0:00:01.315) 0:00:34.467 ******** 2026-04-11 07:49:05.172308 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.172319 | orchestrator | 2026-04-11 07:49:05.172330 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-11 07:49:05.172340 | orchestrator | Saturday 11 April 2026 07:48:46 +0000 (0:00:01.326) 0:00:35.794 ******** 2026-04-11 07:49:05.172351 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:49:05.172411 | orchestrator | 2026-04-11 07:49:05.172424 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-11 07:49:05.172435 | orchestrator | Saturday 11 April 2026 07:48:47 +0000 (0:00:01.130) 0:00:36.925 ******** 2026-04-11 07:49:05.172446 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.172457 | orchestrator | 2026-04-11 07:49:05.172467 | orchestrator | TASK [Prepare status test 
vars] ************************************************ 2026-04-11 07:49:05.172510 | orchestrator | Saturday 11 April 2026 07:48:48 +0000 (0:00:01.194) 0:00:38.119 ******** 2026-04-11 07:49:05.172522 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.172532 | orchestrator | 2026-04-11 07:49:05.172543 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-11 07:49:05.172554 | orchestrator | Saturday 11 April 2026 07:48:49 +0000 (0:00:01.122) 0:00:39.242 ******** 2026-04-11 07:49:05.172564 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:49:05.172575 | orchestrator | 2026-04-11 07:49:05.172586 | orchestrator | TASK [Set health test data] **************************************************** 2026-04-11 07:49:05.172597 | orchestrator | Saturday 11 April 2026 07:48:51 +0000 (0:00:02.288) 0:00:41.531 ******** 2026-04-11 07:49:05.172616 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.172627 | orchestrator | 2026-04-11 07:49:05.172638 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-04-11 07:49:05.172648 | orchestrator | Saturday 11 April 2026 07:48:53 +0000 (0:00:01.290) 0:00:42.821 ******** 2026-04-11 07:49:05.172659 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:49:05.172670 | orchestrator | 2026-04-11 07:49:05.172683 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-11 07:49:05.172702 | orchestrator | Saturday 11 April 2026 07:48:54 +0000 (0:00:01.168) 0:00:43.990 ******** 2026-04-11 07:49:05.172721 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:49:05.172741 | orchestrator | 2026-04-11 07:49:05.172761 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-04-11 07:49:05.172779 | orchestrator | Saturday 11 April 2026 07:48:55 +0000 (0:00:01.169) 0:00:45.160 ******** 2026-04-11 07:49:05.172797 | orchestrator | skipping: 
[testbed-node-0] 2026-04-11 07:49:05.172808 | orchestrator | 2026-04-11 07:49:05.172819 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-11 07:49:05.172829 | orchestrator | Saturday 11 April 2026 07:48:56 +0000 (0:00:01.153) 0:00:46.313 ******** 2026-04-11 07:49:05.172840 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:49:05.172850 | orchestrator | 2026-04-11 07:49:05.172861 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-11 07:49:05.172872 | orchestrator | Saturday 11 April 2026 07:48:57 +0000 (0:00:01.134) 0:00:47.448 ******** 2026-04-11 07:49:05.172882 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:49:05.172893 | orchestrator | 2026-04-11 07:49:05.172904 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-11 07:49:05.172914 | orchestrator | Saturday 11 April 2026 07:48:59 +0000 (0:00:01.318) 0:00:48.767 ******** 2026-04-11 07:49:05.172925 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:49:05.172935 | orchestrator | 2026-04-11 07:49:05.172946 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-11 07:49:05.172956 | orchestrator | Saturday 11 April 2026 07:49:00 +0000 (0:00:01.298) 0:00:50.065 ******** 2026-04-11 07:49:05.172973 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:49:05.172984 | orchestrator | 2026-04-11 07:49:05.172995 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-11 07:49:05.173006 | orchestrator | Saturday 11 April 2026 07:49:03 +0000 (0:00:02.979) 0:00:53.044 ******** 2026-04-11 07:49:05.173016 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:49:05.173027 | orchestrator | 2026-04-11 07:49:05.173037 | orchestrator | TASK [Aggregate test results 
step three] *************************************** 2026-04-11 07:49:05.173048 | orchestrator | Saturday 11 April 2026 07:49:04 +0000 (0:00:01.497) 0:00:54.541 ******** 2026-04-11 07:49:05.173059 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:49:05.173069 | orchestrator | 2026-04-11 07:49:05.173088 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 07:49:12.327097 | orchestrator | Saturday 11 April 2026 07:49:06 +0000 (0:00:01.285) 0:00:55.827 ******** 2026-04-11 07:49:12.327211 | orchestrator | 2026-04-11 07:49:12.327228 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 07:49:12.327241 | orchestrator | Saturday 11 April 2026 07:49:06 +0000 (0:00:00.430) 0:00:56.258 ******** 2026-04-11 07:49:12.327252 | orchestrator | 2026-04-11 07:49:12.327263 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 07:49:12.327274 | orchestrator | Saturday 11 April 2026 07:49:07 +0000 (0:00:00.481) 0:00:56.739 ******** 2026-04-11 07:49:12.327285 | orchestrator | 2026-04-11 07:49:12.327295 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-11 07:49:12.327306 | orchestrator | Saturday 11 April 2026 07:49:07 +0000 (0:00:00.833) 0:00:57.572 ******** 2026-04-11 07:49:12.327345 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:49:12.327357 | orchestrator | 2026-04-11 07:49:12.327368 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-11 07:49:12.327378 | orchestrator | Saturday 11 April 2026 07:49:10 +0000 (0:00:02.413) 0:00:59.986 ******** 2026-04-11 07:49:12.327389 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-11 07:49:12.327399 | orchestrator |  "msg": [ 2026-04-11 07:49:12.327411 | 
orchestrator |  "Validator run completed.", 2026-04-11 07:49:12.327423 | orchestrator |  "You can find the report file here:", 2026-04-11 07:49:12.327434 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-11T07:48:13+00:00-report.json", 2026-04-11 07:49:12.327445 | orchestrator |  "on the following host:", 2026-04-11 07:49:12.327456 | orchestrator |  "testbed-manager" 2026-04-11 07:49:12.327514 | orchestrator |  ] 2026-04-11 07:49:12.327526 | orchestrator | } 2026-04-11 07:49:12.327537 | orchestrator | 2026-04-11 07:49:12.327549 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:49:12.327561 | orchestrator | testbed-node-0 : ok=24  changed=4  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-11 07:49:12.327574 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 07:49:12.327585 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 07:49:12.327596 | orchestrator | 2026-04-11 07:49:12.327606 | orchestrator | 2026-04-11 07:49:12.327617 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:49:12.327628 | orchestrator | Saturday 11 April 2026 07:49:11 +0000 (0:00:01.683) 0:01:01.669 ******** 2026-04-11 07:49:12.327638 | orchestrator | =============================================================================== 2026-04-11 07:49:12.327649 | orchestrator | Aggregate test results step one ----------------------------------------- 2.98s 2026-04-11 07:49:12.327660 | orchestrator | Get timestamp for report file ------------------------------------------- 2.81s 2026-04-11 07:49:12.327670 | orchestrator | Get monmap info from one mon container ---------------------------------- 2.61s 2026-04-11 07:49:12.327681 | orchestrator | Get container info ------------------------------------------------------ 2.52s 
2026-04-11 07:49:12.327692 | orchestrator | Write report file ------------------------------------------------------- 2.41s 2026-04-11 07:49:12.327702 | orchestrator | Gather status data ------------------------------------------------------ 2.29s 2026-04-11 07:49:12.327713 | orchestrator | Flush handlers ---------------------------------------------------------- 1.87s 2026-04-11 07:49:12.327723 | orchestrator | Prepare test data for container existence test -------------------------- 1.78s 2026-04-11 07:49:12.327734 | orchestrator | Flush handlers ---------------------------------------------------------- 1.75s 2026-04-11 07:49:12.327744 | orchestrator | Print report file information ------------------------------------------- 1.68s 2026-04-11 07:49:12.327755 | orchestrator | Create report output directory ------------------------------------------ 1.68s 2026-04-11 07:49:12.327766 | orchestrator | Aggregate test results step two ----------------------------------------- 1.50s 2026-04-11 07:49:12.327776 | orchestrator | Prepare test data ------------------------------------------------------- 1.39s 2026-04-11 07:49:12.327787 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 1.38s 2026-04-11 07:49:12.327797 | orchestrator | Set quorum test data ---------------------------------------------------- 1.38s 2026-04-11 07:49:12.327808 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 1.36s 2026-04-11 07:49:12.327818 | orchestrator | Set test result to failed if container is missing ----------------------- 1.36s 2026-04-11 07:49:12.327829 | orchestrator | Set test result to passed if container is existing ---------------------- 1.35s 2026-04-11 07:49:12.327840 | orchestrator | Set fsid test vars ------------------------------------------------------ 1.33s 2026-04-11 07:49:12.327874 | orchestrator | Set validation result to passed if no test failed ----------------------- 1.32s 2026-04-11 
07:49:12.522909 | orchestrator | + osism validate ceph-mgrs 2026-04-11 07:50:15.620099 | orchestrator | 2026-04-11 07:50:15.620218 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-11 07:50:15.620236 | orchestrator | 2026-04-11 07:50:15.620248 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-11 07:50:15.620260 | orchestrator | Saturday 11 April 2026 07:49:29 +0000 (0:00:01.714) 0:00:01.714 ******** 2026-04-11 07:50:15.620272 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:50:15.620283 | orchestrator | 2026-04-11 07:50:15.620294 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-11 07:50:15.620305 | orchestrator | Saturday 11 April 2026 07:49:31 +0000 (0:00:02.737) 0:00:04.452 ******** 2026-04-11 07:50:15.620316 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:50:15.620327 | orchestrator | 2026-04-11 07:50:15.620338 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-11 07:50:15.620349 | orchestrator | Saturday 11 April 2026 07:49:33 +0000 (0:00:01.786) 0:00:06.238 ******** 2026-04-11 07:50:15.620360 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:50:15.620372 | orchestrator | 2026-04-11 07:50:15.620383 | orchestrator | TASK [Prepare test data for container existence test] ************************** 2026-04-11 07:50:15.620394 | orchestrator | Saturday 11 April 2026 07:49:34 +0000 (0:00:01.134) 0:00:07.373 ******** 2026-04-11 07:50:15.620405 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:50:15.620416 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:50:15.620427 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:50:15.620438 | orchestrator | 2026-04-11 07:50:15.620524 | orchestrator | TASK [Get container info] ****************************************************** 
2026-04-11 07:50:15.620537 | orchestrator | Saturday 11 April 2026 07:49:36 +0000 (0:00:01.834) 0:00:09.208 ******** 2026-04-11 07:50:15.620548 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:50:15.620559 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:50:15.620570 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:50:15.620580 | orchestrator | 2026-04-11 07:50:15.620591 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-11 07:50:15.620602 | orchestrator | Saturday 11 April 2026 07:49:39 +0000 (0:00:02.574) 0:00:11.783 ******** 2026-04-11 07:50:15.620613 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:50:15.620624 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:50:15.620635 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:50:15.620646 | orchestrator | 2026-04-11 07:50:15.620657 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-11 07:50:15.620668 | orchestrator | Saturday 11 April 2026 07:49:40 +0000 (0:00:01.432) 0:00:13.216 ******** 2026-04-11 07:50:15.620679 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:50:15.620690 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:50:15.620701 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:50:15.620712 | orchestrator | 2026-04-11 07:50:15.620723 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-11 07:50:15.620734 | orchestrator | Saturday 11 April 2026 07:49:42 +0000 (0:00:01.335) 0:00:14.551 ******** 2026-04-11 07:50:15.620745 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:50:15.620755 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:50:15.620766 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:50:15.620777 | orchestrator | 2026-04-11 07:50:15.620788 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2026-04-11 07:50:15.620799 | orchestrator | Saturday 11 
April 2026 07:49:43 +0000 (0:00:01.368) 0:00:15.919 ******** 2026-04-11 07:50:15.620809 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:50:15.620820 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:50:15.620831 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:50:15.620842 | orchestrator | 2026-04-11 07:50:15.620853 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-11 07:50:15.620888 | orchestrator | Saturday 11 April 2026 07:49:44 +0000 (0:00:01.368) 0:00:17.287 ******** 2026-04-11 07:50:15.620899 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:50:15.620911 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:50:15.620922 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:50:15.620932 | orchestrator | 2026-04-11 07:50:15.620943 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-11 07:50:15.620954 | orchestrator | Saturday 11 April 2026 07:49:46 +0000 (0:00:01.324) 0:00:18.612 ******** 2026-04-11 07:50:15.620965 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:50:15.620976 | orchestrator | 2026-04-11 07:50:15.620987 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-11 07:50:15.620998 | orchestrator | Saturday 11 April 2026 07:49:47 +0000 (0:00:01.265) 0:00:19.877 ******** 2026-04-11 07:50:15.621009 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:50:15.621020 | orchestrator | 2026-04-11 07:50:15.621031 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-11 07:50:15.621041 | orchestrator | Saturday 11 April 2026 07:49:48 +0000 (0:00:01.241) 0:00:21.119 ******** 2026-04-11 07:50:15.621052 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:50:15.621063 | orchestrator | 2026-04-11 07:50:15.621074 | orchestrator | TASK [Flush handlers] ********************************************************** 
2026-04-11 07:50:15.621085 | orchestrator | Saturday 11 April 2026 07:49:49 +0000 (0:00:01.271) 0:00:22.391 ******** 2026-04-11 07:50:15.621096 | orchestrator | 2026-04-11 07:50:15.621107 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 07:50:15.621118 | orchestrator | Saturday 11 April 2026 07:49:50 +0000 (0:00:00.452) 0:00:22.843 ******** 2026-04-11 07:50:15.621129 | orchestrator | 2026-04-11 07:50:15.621139 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 07:50:15.621150 | orchestrator | Saturday 11 April 2026 07:49:50 +0000 (0:00:00.669) 0:00:23.512 ******** 2026-04-11 07:50:15.621161 | orchestrator | 2026-04-11 07:50:15.621172 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-11 07:50:15.621182 | orchestrator | Saturday 11 April 2026 07:49:51 +0000 (0:00:00.797) 0:00:24.310 ******** 2026-04-11 07:50:15.621193 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:50:15.621204 | orchestrator | 2026-04-11 07:50:15.621215 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-11 07:50:15.621226 | orchestrator | Saturday 11 April 2026 07:49:53 +0000 (0:00:01.333) 0:00:25.644 ******** 2026-04-11 07:50:15.621237 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:50:15.621248 | orchestrator | 2026-04-11 07:50:15.621277 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-04-11 07:50:15.621288 | orchestrator | Saturday 11 April 2026 07:49:54 +0000 (0:00:01.278) 0:00:26.922 ******** 2026-04-11 07:50:15.621299 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:50:15.621310 | orchestrator | 2026-04-11 07:50:15.621321 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2026-04-11 07:50:15.621332 | orchestrator | Saturday 11 April 2026 07:49:55 
+0000 (0:00:01.126) 0:00:28.049 ******** 2026-04-11 07:50:15.621342 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:50:15.621353 | orchestrator | 2026-04-11 07:50:15.621364 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-11 07:50:15.621375 | orchestrator | Saturday 11 April 2026 07:49:58 +0000 (0:00:03.013) 0:00:31.062 ******** 2026-04-11 07:50:15.621386 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:50:15.621397 | orchestrator | 2026-04-11 07:50:15.621408 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-11 07:50:15.621418 | orchestrator | Saturday 11 April 2026 07:49:59 +0000 (0:00:01.324) 0:00:32.387 ******** 2026-04-11 07:50:15.621429 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:50:15.621458 | orchestrator | 2026-04-11 07:50:15.621469 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-11 07:50:15.621490 | orchestrator | Saturday 11 April 2026 07:50:01 +0000 (0:00:01.266) 0:00:33.654 ******** 2026-04-11 07:50:15.621501 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:50:15.621512 | orchestrator | 2026-04-11 07:50:15.621523 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-11 07:50:15.621534 | orchestrator | Saturday 11 April 2026 07:50:02 +0000 (0:00:01.145) 0:00:34.799 ******** 2026-04-11 07:50:15.621545 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:50:15.621556 | orchestrator | 2026-04-11 07:50:15.621567 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-11 07:50:15.621578 | orchestrator | Saturday 11 April 2026 07:50:03 +0000 (0:00:01.167) 0:00:35.967 ******** 2026-04-11 07:50:15.621588 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:50:15.621599 | orchestrator | 2026-04-11 07:50:15.621610 | orchestrator | TASK 
[Set validation result to failed if a test failed] ************************ 2026-04-11 07:50:15.621621 | orchestrator | Saturday 11 April 2026 07:50:04 +0000 (0:00:01.506) 0:00:37.473 ******** 2026-04-11 07:50:15.621632 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:50:15.621643 | orchestrator | 2026-04-11 07:50:15.621654 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-11 07:50:15.621665 | orchestrator | Saturday 11 April 2026 07:50:06 +0000 (0:00:01.519) 0:00:38.992 ******** 2026-04-11 07:50:15.621675 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:50:15.621686 | orchestrator | 2026-04-11 07:50:15.621697 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-11 07:50:15.621708 | orchestrator | Saturday 11 April 2026 07:50:08 +0000 (0:00:02.176) 0:00:41.169 ******** 2026-04-11 07:50:15.621719 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:50:15.621730 | orchestrator | 2026-04-11 07:50:15.621741 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-11 07:50:15.621752 | orchestrator | Saturday 11 April 2026 07:50:09 +0000 (0:00:01.254) 0:00:42.423 ******** 2026-04-11 07:50:15.621762 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:50:15.621773 | orchestrator | 2026-04-11 07:50:15.621784 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 07:50:15.621795 | orchestrator | Saturday 11 April 2026 07:50:11 +0000 (0:00:01.256) 0:00:43.680 ******** 2026-04-11 07:50:15.621806 | orchestrator | 2026-04-11 07:50:15.621817 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 07:50:15.621828 | orchestrator | Saturday 11 April 2026 07:50:11 +0000 (0:00:00.457) 0:00:44.138 ******** 
2026-04-11 07:50:15.621838 | orchestrator | 2026-04-11 07:50:15.621849 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-11 07:50:15.621860 | orchestrator | Saturday 11 April 2026 07:50:12 +0000 (0:00:00.440) 0:00:44.578 ******** 2026-04-11 07:50:15.621871 | orchestrator | 2026-04-11 07:50:15.621897 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-11 07:50:15.621909 | orchestrator | Saturday 11 April 2026 07:50:12 +0000 (0:00:00.795) 0:00:45.373 ******** 2026-04-11 07:50:15.621920 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-11 07:50:15.621931 | orchestrator | 2026-04-11 07:50:15.621942 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-11 07:50:15.621953 | orchestrator | Saturday 11 April 2026 07:50:15 +0000 (0:00:02.348) 0:00:47.722 ******** 2026-04-11 07:50:15.621963 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-11 07:50:15.621974 | orchestrator |  "msg": [ 2026-04-11 07:50:15.621986 | orchestrator |  "Validator run completed.", 2026-04-11 07:50:15.621997 | orchestrator |  "You can find the report file here:", 2026-04-11 07:50:15.622008 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-11T07:49:30+00:00-report.json", 2026-04-11 07:50:15.622080 | orchestrator |  "on the following host:", 2026-04-11 07:50:15.622092 | orchestrator |  "testbed-manager" 2026-04-11 07:50:15.622111 | orchestrator |  ] 2026-04-11 07:50:15.622123 | orchestrator | } 2026-04-11 07:50:15.622134 | orchestrator | 2026-04-11 07:50:15.622145 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:50:15.622157 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-11 07:50:15.622170 | orchestrator | testbed-node-1 : ok=5  
changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 07:50:15.622195 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-11 07:50:17.195521 | orchestrator | 2026-04-11 07:50:17.195638 | orchestrator | 2026-04-11 07:50:17.195655 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:50:17.195669 | orchestrator | Saturday 11 April 2026 07:50:16 +0000 (0:00:01.628) 0:00:49.350 ******** 2026-04-11 07:50:17.195681 | orchestrator | =============================================================================== 2026-04-11 07:50:17.195691 | orchestrator | Gather list of mgr modules ---------------------------------------------- 3.01s 2026-04-11 07:50:17.195702 | orchestrator | Get timestamp for report file ------------------------------------------- 2.74s 2026-04-11 07:50:17.195713 | orchestrator | Get container info ------------------------------------------------------ 2.57s 2026-04-11 07:50:17.195723 | orchestrator | Write report file ------------------------------------------------------- 2.35s 2026-04-11 07:50:17.195734 | orchestrator | Aggregate test results step one ----------------------------------------- 2.18s 2026-04-11 07:50:17.195745 | orchestrator | Flush handlers ---------------------------------------------------------- 1.92s 2026-04-11 07:50:17.195755 | orchestrator | Prepare test data for container existence test -------------------------- 1.83s 2026-04-11 07:50:17.195766 | orchestrator | Create report output directory ------------------------------------------ 1.79s 2026-04-11 07:50:17.195776 | orchestrator | Flush handlers ---------------------------------------------------------- 1.69s 2026-04-11 07:50:17.195787 | orchestrator | Print report file information ------------------------------------------- 1.63s 2026-04-11 07:50:17.195797 | orchestrator | Set validation result to failed if a test failed 
------------------------ 1.52s 2026-04-11 07:50:17.195808 | orchestrator | Set validation result to passed if no test failed ----------------------- 1.51s 2026-04-11 07:50:17.195818 | orchestrator | Set test result to failed if container is missing ----------------------- 1.43s 2026-04-11 07:50:17.195829 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 1.37s 2026-04-11 07:50:17.195839 | orchestrator | Prepare test data ------------------------------------------------------- 1.37s 2026-04-11 07:50:17.195850 | orchestrator | Set test result to passed if container is existing ---------------------- 1.33s 2026-04-11 07:50:17.195860 | orchestrator | Print report file information ------------------------------------------- 1.33s 2026-04-11 07:50:17.195871 | orchestrator | Parse mgr module list from json ----------------------------------------- 1.32s 2026-04-11 07:50:17.195882 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 1.32s 2026-04-11 07:50:17.195892 | orchestrator | Fail due to missing containers ------------------------------------------ 1.28s 2026-04-11 07:50:17.384973 | orchestrator | + osism validate ceph-osds 2026-04-11 07:50:50.041805 | orchestrator | 2026-04-11 07:50:50.041922 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-11 07:50:50.041940 | orchestrator | 2026-04-11 07:50:50.041953 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-11 07:50:50.041965 | orchestrator | Saturday 11 April 2026 07:50:34 +0000 (0:00:01.856) 0:00:01.856 ******** 2026-04-11 07:50:50.041978 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-11 07:50:50.041990 | orchestrator | 2026-04-11 07:50:50.042001 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-11 07:50:50.042013 | orchestrator | Saturday 11 
April 2026 07:50:36 +0000 (0:00:02.802) 0:00:04.659 ******** 2026-04-11 07:50:50.042108 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-11 07:50:50.042120 | orchestrator | 2026-04-11 07:50:50.042131 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-11 07:50:50.042142 | orchestrator | Saturday 11 April 2026 07:50:38 +0000 (0:00:01.293) 0:00:05.952 ******** 2026-04-11 07:50:50.042153 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-11 07:50:50.042164 | orchestrator | 2026-04-11 07:50:50.042174 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-11 07:50:50.042185 | orchestrator | Saturday 11 April 2026 07:50:39 +0000 (0:00:01.729) 0:00:07.682 ******** 2026-04-11 07:50:50.042196 | orchestrator | ok: [testbed-node-3] 2026-04-11 07:50:50.042208 | orchestrator | 2026-04-11 07:50:50.042219 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-11 07:50:50.042230 | orchestrator | Saturday 11 April 2026 07:50:41 +0000 (0:00:01.202) 0:00:08.884 ******** 2026-04-11 07:50:50.042241 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:50:50.042252 | orchestrator | 2026-04-11 07:50:50.042263 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-11 07:50:50.042274 | orchestrator | Saturday 11 April 2026 07:50:42 +0000 (0:00:01.139) 0:00:10.024 ******** 2026-04-11 07:50:50.042284 | orchestrator | skipping: [testbed-node-3] 2026-04-11 07:50:50.042295 | orchestrator | skipping: [testbed-node-4] 2026-04-11 07:50:50.042305 | orchestrator | skipping: [testbed-node-5] 2026-04-11 07:50:50.042316 | orchestrator | 2026-04-11 07:50:50.042327 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-11 07:50:50.042339 | orchestrator | Saturday 11 April 2026 07:50:44 
+0000 (0:00:01.895) 0:00:11.919 ********
2026-04-11 07:50:50.042351 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:50:50.042364 | orchestrator |
2026-04-11 07:50:50.042377 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-11 07:50:50.042390 | orchestrator | Saturday 11 April 2026 07:50:45 +0000 (0:00:01.187) 0:00:13.107 ********
2026-04-11 07:50:50.042402 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:50:50.042415 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:50:50.042429 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:50:50.042441 | orchestrator |
2026-04-11 07:50:50.042453 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-04-11 07:50:50.042466 | orchestrator | Saturday 11 April 2026 07:50:46 +0000 (0:00:01.364) 0:00:14.471 ********
2026-04-11 07:50:50.042503 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:50:50.042516 | orchestrator |
2026-04-11 07:50:50.042541 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-11 07:50:50.042555 | orchestrator | Saturday 11 April 2026 07:50:48 +0000 (0:00:01.405) 0:00:15.877 ********
2026-04-11 07:50:50.042566 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:50:50.042577 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:50:50.042588 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:50:50.042598 | orchestrator |
2026-04-11 07:50:50.042609 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-04-11 07:50:50.042620 | orchestrator | Saturday 11 April 2026 07:50:49 +0000 (0:00:01.374) 0:00:17.251 ********
2026-04-11 07:50:50.042633 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5c914abd576cd07f3ff3f35d096457069654bdbf6497c56985333586f87a324d', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})
2026-04-11 07:50:50.042647 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aa55df539d6a574da2e2f0929def2fa7235687daab4ef398eb5c752c9e8f8e17', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-11 07:50:50.042658 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd411a93a4ca3d55d8e35fac47230d540e38f45ba878aa7577e9d25b38b82357c', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-11 07:50:50.042679 | orchestrator | skipping: [testbed-node-3] => (item={'id': '22783d1cdac614989a917b3ddbec210ee8c9beb2bd97cb8c8e79ad9e33ec58c1', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 23 minutes'})
2026-04-11 07:50:50.042690 | orchestrator | skipping: [testbed-node-3] => (item={'id': '47eb6103fcce9aa973346ec55815dd4d624d8903847b9211636e05f4e83f47a8', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-04-11 07:50:50.042720 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cae38bbf9588df3d7218d43ecd17e2de2c3852bb820ba4402c574b7203582640', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})
2026-04-11 07:50:50.042733 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3c6260f6ee4337d8a91cde57208c436f5e33d8b10b019a6523140540101684fb', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})
2026-04-11 07:50:50.042744 | orchestrator | skipping: [testbed-node-3] => (item={'id': '486d50bff801f16505685e5f233a5c589e7a91f4fc24b122b3ff7eacbb802ac8', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-11 07:50:50.042766 | orchestrator | skipping: [testbed-node-3] => (item={'id': '38dc8dc5d5b3a0d94f3f85333d37044544d02c03df3c33aeea96d9c3695b59e4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-11 07:50:50.042778 | orchestrator | skipping: [testbed-node-3] => (item={'id': '06121678924c432d85615b42d8f13c1fc59c89db825f14d7c6c7ad3718dd8fda', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-11 07:50:50.042789 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cb398ca31ef4bbcc716821ec667526a629f8908c395e60bd393ad199904ebd41', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-11 07:50:50.042801 | orchestrator | ok: [testbed-node-3] => (item={'id': '05a77c8cf7576145f2e91c25ac0cf78aa91922a5c937b5fda74a0b2ae01f265c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-11 07:50:50.042813 | orchestrator | ok: [testbed-node-3] => (item={'id': '2e912d9692fff23e434d59e06470006e4743fd39d3ca1725a1292a61505065ad', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-11 07:50:50.042829 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ca65c389eaa4ff0e32dac4dde80e591f44b8fd350c7a6f61ff1afc64680f3156', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-11 07:50:50.042841 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ea477fb00e107d604b3e3f4c0503c54a3fd3eadbe6a3a65227faa4272ef9db81', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 3 hours (healthy)'})
2026-04-11 07:50:50.042852 | orchestrator | skipping: [testbed-node-3] => (item={'id': '78cb008a461923ffb00769c6abd45ffe694a8cece56665c2831125a17b6d7a85', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 3 hours (healthy)'})
2026-04-11 07:50:50.042869 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0159122f5e10eb00a3aae1ef21955bcc19294d2986d92f6a0584285643bd390a', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-11 07:50:50.042881 | orchestrator | skipping: [testbed-node-3] => (item={'id': '631c35ecf0252c5aa4990e57c805e4a5fad769442c9263e5b36b1fb151b7d89c', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-11 07:50:50.042892 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd47439f10d0aa8825eeb524d9378ce0a564ea473763a81a9ba153249425ae762', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-11 07:50:50.042903 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5de6941ee7bcf85f485869fa54fe692a3b48203f526abf56741d9a5c54ec5038', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})
2026-04-11 07:50:50.042921 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3006980e5c3283fc274fdbd7b6720ed6449d56b80255961c02947ba34f3bf24a', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-11 07:50:50.263105 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd643b98fed85a9dc0e0c059d2d6a3c7c45b34f4dc9709d5efa965836e4aad8b9', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-11 07:50:50.263220 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a767560dd08e7e16d908ea6110edb0a22362a0b990e89461230e8c2fc67e2ec4', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 23 minutes'})
2026-04-11 07:50:50.263239 | orchestrator | skipping: [testbed-node-4] => (item={'id': '12675d5bb52f5d29a8c78fbc49aa8334467a1ed67295040a57902afc83b1b5d4', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-04-11 07:50:50.263251 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ba4ec2f118bdcc6d39baa5c693a46eefa23b0815205895f8e96edc92e704f27a', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 57 minutes (healthy)'})
2026-04-11 07:50:50.263263 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3b57bd9f2a45430bed9e2f883c83c55a7edc6f76f23fdefcbcc9e52022846d28', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})
2026-04-11 07:50:50.263275 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c80e419bbac69a4252f7fbe3f6be5d2cac515713a48f778b311b6621c9da9ca6', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-11 07:50:50.263302 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2fa084a25bb9397756dde1b95e79fdd20b9b4f16245da87e3cdb1925c5254807', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-04-11 07:50:50.263315 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dd4b11239325f5a633d52e889613b5788c9ef45ac8c4498e69618f52a2b58f6b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-11 07:50:50.263349 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6dbff6dd7a2d5669edf598962046e6797b2f3c6b8f6b967b71384b26b9ce9446', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-11 07:50:50.263362 | orchestrator | ok: [testbed-node-4] => (item={'id': 'ab07dfa48603287599065efc85711b0526b3b80daebfd558e625be034409a470', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-11 07:50:50.263374 | orchestrator | ok: [testbed-node-4] => (item={'id': 'b6d17e3f360105cf3f6a91381a0586bad9eaa09ab253926ec0a012122980786e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-11 07:50:50.263385 | orchestrator | skipping: [testbed-node-4] => (item={'id': '49b61092b42ec63f5e690a9fae739466ede27004be57dbf0add18684270dc892', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-11 07:50:50.263396 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1a44059cf53d9ded97e6717b00d2599adf4382a0f22f05c7a22dddc90acc4dc1', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 3 hours (healthy)'})
2026-04-11 07:50:50.263407 | orchestrator | skipping: [testbed-node-4] => (item={'id': '815a6f517a799d7ce0d7265f0552132f8b1cb13f181543e5342bdb890cf6d061', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 3 hours (healthy)'})
2026-04-11 07:50:50.263434 | orchestrator | skipping: [testbed-node-4] => (item={'id': '194badc5e749d907e88d0eb1bccedc7b1efaf8dd0079ff1c309a44ce2896c36f', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-11 07:50:50.263446 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f8068da0e9f69bf39757501e5e627d1d6f6582342705ff0249518e8f44f34ca8', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-11 07:50:50.263457 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fdd54e19d2d51930c1261fc863b3a06b36ee5ab54025dd1d180498ba304379b4', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-11 07:50:50.263469 | orchestrator | skipping: [testbed-node-5] => (item={'id': '47dda7d2e095863dbf85e21ab7b9f99c0e6495882077ef1f33ad5ffd18568c62', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})
2026-04-11 07:50:50.263523 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c8e7364fc16c015804bdca15ed0fb4b5d86a2e03c87a5293b222b66f185117cd', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-11 07:50:50.263534 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ba92a338fe02edf6074eb3d18fae0ffba72a6703e477265ce53714d2b54c6dae', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-11 07:50:50.263545 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dd74b92ecc320c0acbd51e30d4d2cbb17d4135e4473bce95fc59c0c0f68772b9', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 23 minutes'})
2026-04-11 07:50:50.263556 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4548f1a316ed4c4b43550e3904ae665765d7052c287307a1694f722d3dbab1d2', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-04-11 07:50:50.263576 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'aa59799b258a39736c6e8f86307b74db952baf87868aa4105df009d655f90bfb', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 57 minutes (healthy)'})
2026-04-11 07:50:50.263587 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0d116ad230acd15f0b835d903554d3c3a80e95af896d51eb508332b5bb4e0b5c', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})
2026-04-11 07:50:50.263598 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b519f30c25ddfcbc88b36ca2a33c6a53c84162e6dcc331a153a7ffc142c2e99b', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-11 07:50:50.263609 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fb25f94747f5fb4b09c0ecf27cb14003c163f9747f54a4d064c42190756501c8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-04-11 07:50:50.263628 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9e7caaf39549a2fc00b0b6f132e868b42389ab9b8422ad48ef09b5bb94e6f096', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-11 07:50:50.263639 | orchestrator | skipping: [testbed-node-5] => (item={'id': '74bb74adb02d1a1dd00a88d4517c87adb7ab0d4ed46ffcd3e0be7fe7ecedddce', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-11 07:50:50.263652 | orchestrator | ok: [testbed-node-5] => (item={'id': 'e6909a07830c823484b8d0d98d79c99e7f516733802755fd7ef40d5b56593f03', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-11 07:50:50.263674 | orchestrator | ok: [testbed-node-5] => (item={'id': '2bcae401e8b4a7c9cdba58ed9b700cb700603524d186014b27895afea7ccfcdc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-11 07:51:27.226578 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3546f8ca3a054c4e78576cec9cf872d029340da7af60711ccbc5001c2bc4c24a', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-11 07:51:27.226696 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5846aaf755dfd337f64be7177af7cd29db8fd75a22d3da5439d7957fccb3be59', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 3 hours (healthy)'})
2026-04-11 07:51:27.226714 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ed0b95d9d9ca364fc095577e8d03579a99e79ee505c074cf84a580380b6c7c5a', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 3 hours (healthy)'})
2026-04-11 07:51:27.226727 | orchestrator | skipping: [testbed-node-5] => (item={'id': '14acec242687a9354733a20b8d650526732c4228c75e07a6ec8ac9b815d5693f', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-11 07:51:27.226739 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6c852cdce82e53e34779c3ea6b7f60c8cfb38dabc47110765098f8e77a5ba5a2', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-11 07:51:27.226774 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4a3f31335852e3f10ba97f4961823e096e6004876da014d006f59c7de101be37', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-11 07:51:27.226786 | orchestrator |
2026-04-11 07:51:27.226813 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-04-11 07:51:27.226841 | orchestrator | Saturday 11 April 2026 07:50:51 +0000 (0:00:01.907) 0:00:19.159 ********
2026-04-11 07:51:27.226853 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:27.226864 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:27.226887 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:27.226898 | orchestrator |
2026-04-11 07:51:27.226909 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-04-11 07:51:27.226920 | orchestrator | Saturday 11 April 2026 07:50:52 +0000 (0:00:01.323) 0:00:20.483 ********
2026-04-11 07:51:27.226930 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:27.226942 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:51:27.226952 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:51:27.226963 | orchestrator |
2026-04-11 07:51:27.226975 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-04-11 07:51:27.226986 | orchestrator | Saturday 11 April 2026 07:50:54 +0000 (0:00:01.376) 0:00:21.859 ********
2026-04-11 07:51:27.226996 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:27.227007 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:27.227018 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:27.227028 | orchestrator |
2026-04-11 07:51:27.227039 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-11 07:51:27.227050 | orchestrator | Saturday 11 April 2026 07:50:55 +0000 (0:00:01.573) 0:00:23.432 ********
2026-04-11 07:51:27.227060 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:27.227071 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:27.227084 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:27.227097 | orchestrator |
2026-04-11 07:51:27.227109 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-04-11 07:51:27.227122 | orchestrator | Saturday 11 April 2026 07:50:57 +0000 (0:00:01.531) 0:00:24.964 ********
2026-04-11 07:51:27.227134 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-04-11 07:51:27.227147 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-04-11 07:51:27.227159 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:27.227172 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-04-11 07:51:27.227184 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-04-11 07:51:27.227197 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:51:27.227209 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-04-11 07:51:27.227221 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-04-11 07:51:27.227233 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:51:27.227245 | orchestrator |
2026-04-11 07:51:27.227258 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-04-11 07:51:27.227270 | orchestrator | Saturday 11 April 2026 07:50:58 +0000 (0:00:01.383) 0:00:26.347 ********
2026-04-11 07:51:27.227282 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:27.227294 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:27.227306 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:27.227319 | orchestrator |
2026-04-11 07:51:27.227331 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-04-11 07:51:27.227344 | orchestrator | Saturday 11 April 2026 07:50:59 +0000 (0:00:01.324) 0:00:27.672 ********
2026-04-11 07:51:27.227380 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:27.227394 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:51:27.227406 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:51:27.227418 | orchestrator |
2026-04-11 07:51:27.227431 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-04-11 07:51:27.227443 | orchestrator | Saturday 11 April 2026 07:51:01 +0000 (0:00:01.415) 0:00:29.087 ********
2026-04-11 07:51:27.227454 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:27.227464 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:51:27.227475 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:51:27.227486 | orchestrator |
2026-04-11 07:51:27.227496 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-04-11 07:51:27.227549 | orchestrator | Saturday 11 April 2026 07:51:02 +0000 (0:00:01.356) 0:00:30.444 ********
2026-04-11 07:51:27.227560 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:27.227570 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:27.227581 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:27.227592 | orchestrator |
2026-04-11 07:51:27.227602 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-11 07:51:27.227613 | orchestrator | Saturday 11 April 2026 07:51:04 +0000 (0:00:01.403) 0:00:31.848 ********
2026-04-11 07:51:27.227624 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:27.227634 | orchestrator |
2026-04-11 07:51:27.227645 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-11 07:51:27.227656 | orchestrator | Saturday 11 April 2026 07:51:05 +0000 (0:00:01.286) 0:00:33.134 ********
2026-04-11 07:51:27.227666 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:27.227677 | orchestrator |
2026-04-11 07:51:27.227687 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-11 07:51:27.227698 | orchestrator | Saturday 11 April 2026 07:51:06 +0000 (0:00:01.220) 0:00:34.354 ********
2026-04-11 07:51:27.227709 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:27.227719 | orchestrator |
2026-04-11 07:51:27.227730 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 07:51:27.227740 | orchestrator | Saturday 11 April 2026 07:51:07 +0000 (0:00:01.245) 0:00:35.599 ********
2026-04-11 07:51:27.227751 | orchestrator |
2026-04-11 07:51:27.227762 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 07:51:27.227773 | orchestrator | Saturday 11 April 2026 07:51:08 +0000 (0:00:00.452) 0:00:36.052 ********
2026-04-11 07:51:27.227783 | orchestrator |
2026-04-11 07:51:27.227794 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 07:51:27.227810 | orchestrator | Saturday 11 April 2026 07:51:08 +0000 (0:00:00.593) 0:00:36.645 ********
2026-04-11 07:51:27.227821 | orchestrator |
2026-04-11 07:51:27.227831 | orchestrator | TASK [Print report file information] *******************************************
2026-04-11 07:51:27.227842 | orchestrator | Saturday 11 April 2026 07:51:09 +0000 (0:00:00.768) 0:00:37.414 ********
2026-04-11 07:51:27.227853 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:27.227863 | orchestrator |
2026-04-11 07:51:27.227874 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-04-11 07:51:27.227884 | orchestrator | Saturday 11 April 2026 07:51:11 +0000 (0:00:01.350) 0:00:38.764 ********
2026-04-11 07:51:27.227895 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:27.227906 | orchestrator |
2026-04-11 07:51:27.227916 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-11 07:51:27.227927 | orchestrator | Saturday 11 April 2026 07:51:12 +0000 (0:00:01.245) 0:00:40.010 ********
2026-04-11 07:51:27.227937 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:27.227948 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:27.227959 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:27.227969 | orchestrator |
2026-04-11 07:51:27.227980 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-04-11 07:51:27.227991 | orchestrator | Saturday 11 April 2026 07:51:13 +0000 (0:00:01.348) 0:00:41.359 ********
2026-04-11 07:51:27.228009 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:27.228019 | orchestrator |
2026-04-11 07:51:27.228030 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-04-11 07:51:27.228041 | orchestrator | Saturday 11 April 2026 07:51:14 +0000 (0:00:01.284) 0:00:42.644 ********
2026-04-11 07:51:27.228051 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-11 07:51:27.228062 | orchestrator |
2026-04-11 07:51:27.228073 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-04-11 07:51:27.228083 | orchestrator | Saturday 11 April 2026 07:51:18 +0000 (0:00:03.393) 0:00:46.037 ********
2026-04-11 07:51:27.228094 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:27.228105 | orchestrator |
2026-04-11 07:51:27.228115 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-04-11 07:51:27.228126 | orchestrator | Saturday 11 April 2026 07:51:19 +0000 (0:00:01.214) 0:00:47.252 ********
2026-04-11 07:51:27.228137 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:27.228147 | orchestrator |
2026-04-11 07:51:27.228158 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-04-11 07:51:27.228169 | orchestrator | Saturday 11 April 2026 07:51:20 +0000 (0:00:01.339) 0:00:48.592 ********
2026-04-11 07:51:27.228179 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:27.228190 | orchestrator |
2026-04-11 07:51:27.228200 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-04-11 07:51:27.228211 | orchestrator | Saturday 11 April 2026 07:51:22 +0000 (0:00:01.167) 0:00:49.760 ********
2026-04-11 07:51:27.228222 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:27.228232 | orchestrator |
2026-04-11 07:51:27.228243 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-11 07:51:27.228253 | orchestrator | Saturday 11 April 2026 07:51:23 +0000 (0:00:01.210) 0:00:50.971 ********
2026-04-11 07:51:27.228264 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:27.228274 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:27.228285 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:27.228296 | orchestrator |
2026-04-11 07:51:27.228306 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-04-11 07:51:27.228317 | orchestrator | Saturday 11 April 2026 07:51:24 +0000 (0:00:01.372) 0:00:52.343 ********
2026-04-11 07:51:27.228328 | orchestrator | changed: [testbed-node-3]
2026-04-11 07:51:27.228339 | orchestrator | changed: [testbed-node-4]
2026-04-11 07:51:27.228356 | orchestrator | changed: [testbed-node-5]
2026-04-11 07:51:58.643277 | orchestrator |
2026-04-11 07:51:58.643361 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-04-11 07:51:58.643370 | orchestrator | Saturday 11 April 2026 07:51:28 +0000 (0:00:03.671) 0:00:56.015 ********
2026-04-11 07:51:58.643376 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:58.643382 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:58.643387 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:58.643393 | orchestrator |
2026-04-11 07:51:58.643398 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-04-11 07:51:58.643404 | orchestrator | Saturday 11 April 2026 07:51:29 +0000 (0:00:01.330) 0:00:57.345 ********
2026-04-11 07:51:58.643409 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:58.643414 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:58.643419 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:58.643424 | orchestrator |
2026-04-11 07:51:58.643429 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-04-11 07:51:58.643434 | orchestrator | Saturday 11 April 2026 07:51:31 +0000 (0:00:01.803) 0:00:59.148 ********
2026-04-11 07:51:58.643439 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:58.643445 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:51:58.643450 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:51:58.643455 | orchestrator |
2026-04-11 07:51:58.643460 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-04-11 07:51:58.643465 | orchestrator | Saturday 11 April 2026 07:51:32 +0000 (0:00:01.315) 0:01:00.464 ********
2026-04-11 07:51:58.643486 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:58.643492 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:58.643497 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:58.643502 | orchestrator |
2026-04-11 07:51:58.643507 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-04-11 07:51:58.643512 | orchestrator | Saturday 11 April 2026 07:51:34 +0000 (0:00:01.352) 0:01:01.817 ********
2026-04-11 07:51:58.643517 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:58.643563 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:51:58.643568 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:51:58.643573 | orchestrator |
2026-04-11 07:51:58.643578 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-04-11 07:51:58.643583 | orchestrator | Saturday 11 April 2026 07:51:35 +0000 (0:00:01.295) 0:01:03.113 ********
2026-04-11 07:51:58.643589 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:58.643594 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:51:58.643599 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:51:58.643604 | orchestrator |
2026-04-11 07:51:58.643620 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-11 07:51:58.643625 | orchestrator | Saturday 11 April 2026 07:51:36 +0000 (0:00:01.342) 0:01:04.455 ********
2026-04-11 07:51:58.643630 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:58.643635 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:58.643640 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:58.643645 | orchestrator |
2026-04-11 07:51:58.643650 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-04-11 07:51:58.643655 | orchestrator | Saturday 11 April 2026 07:51:38 +0000 (0:00:01.554) 0:01:06.010 ********
2026-04-11 07:51:58.643660 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:58.643665 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:58.643670 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:58.643675 | orchestrator |
2026-04-11 07:51:58.643680 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-04-11 07:51:58.643686 | orchestrator | Saturday 11 April 2026 07:51:39 +0000 (0:00:01.528) 0:01:07.538 ********
2026-04-11 07:51:58.643691 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:58.643696 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:58.643701 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:58.643706 | orchestrator |
2026-04-11 07:51:58.643711 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-04-11 07:51:58.643717 | orchestrator | Saturday 11 April 2026 07:51:41 +0000 (0:00:01.550) 0:01:09.088 ********
2026-04-11 07:51:58.643722 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:58.643727 | orchestrator | skipping: [testbed-node-4]
2026-04-11 07:51:58.643732 | orchestrator | skipping: [testbed-node-5]
2026-04-11 07:51:58.643737 | orchestrator |
2026-04-11 07:51:58.643742 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-04-11 07:51:58.643747 | orchestrator | Saturday 11 April 2026 07:51:42 +0000 (0:00:01.370) 0:01:10.459 ********
2026-04-11 07:51:58.643752 | orchestrator | ok: [testbed-node-3]
2026-04-11 07:51:58.643757 | orchestrator | ok: [testbed-node-4]
2026-04-11 07:51:58.643762 | orchestrator | ok: [testbed-node-5]
2026-04-11 07:51:58.643767 | orchestrator |
2026-04-11 07:51:58.643772 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-11 07:51:58.643777 | orchestrator | Saturday 11 April 2026 07:51:44 +0000 (0:00:01.363) 0:01:11.822 ********
2026-04-11 07:51:58.643782 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 07:51:58.643787 | orchestrator |
2026-04-11 07:51:58.643792 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-11 07:51:58.643797 | orchestrator | Saturday 11 April 2026 07:51:45 +0000 (0:00:01.327) 0:01:13.150 ********
2026-04-11 07:51:58.643802 | orchestrator | skipping: [testbed-node-3]
2026-04-11 07:51:58.643807 | orchestrator |
2026-04-11 07:51:58.643813 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-11 07:51:58.643823 | orchestrator | Saturday 11 April 2026 07:51:46 +0000 (0:00:01.258) 0:01:14.409 ********
2026-04-11 07:51:58.643828 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 07:51:58.643833 | orchestrator |
2026-04-11 07:51:58.643838 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-11 07:51:58.643845 | orchestrator | Saturday 11 April 2026 07:51:49 +0000 (0:00:02.905) 0:01:17.315 ********
2026-04-11 07:51:58.643850 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 07:51:58.643856 | orchestrator |
2026-04-11 07:51:58.643862 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-11 07:51:58.643867 | orchestrator | Saturday 11 April 2026 07:51:51 +0000 (0:00:01.551) 0:01:18.867 ********
2026-04-11 07:51:58.643873 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 07:51:58.643879 | orchestrator |
2026-04-11 07:51:58.643897 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 07:51:58.643903 | orchestrator | Saturday 11 April 2026 07:51:52 +0000 (0:00:01.298) 0:01:20.166 ********
2026-04-11 07:51:58.643909 | orchestrator |
2026-04-11 07:51:58.643914 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 07:51:58.643920 | orchestrator | Saturday 11 April 2026 07:51:52 +0000 (0:00:00.441) 0:01:20.607 ********
2026-04-11 07:51:58.643926 | orchestrator |
2026-04-11 07:51:58.643932 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-11 07:51:58.643937 | orchestrator | Saturday 11 April 2026 07:51:53 +0000 (0:00:00.480) 0:01:21.088 ********
2026-04-11 07:51:58.643943 | orchestrator |
2026-04-11 07:51:58.643949 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-11 07:51:58.643954 | orchestrator | Saturday 11 April 2026 07:51:54 +0000 (0:00:00.824) 0:01:21.913 ********
2026-04-11 07:51:58.643960 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-11 07:51:58.643966 | orchestrator |
2026-04-11 07:51:58.643971 | orchestrator | TASK [Print report file information] *******************************************
2026-04-11 07:51:58.643977 | orchestrator | Saturday 11 April
2026 07:51:56 +0000 (0:00:02.393) 0:01:24.306 ******** 2026-04-11 07:51:58.643983 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-11 07:51:58.643989 | orchestrator |  "msg": [ 2026-04-11 07:51:58.643995 | orchestrator |  "Validator run completed.", 2026-04-11 07:51:58.644002 | orchestrator |  "You can find the report file here:", 2026-04-11 07:51:58.644008 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-11T07:50:35+00:00-report.json", 2026-04-11 07:51:58.644014 | orchestrator |  "on the following host:", 2026-04-11 07:51:58.644021 | orchestrator |  "testbed-manager" 2026-04-11 07:51:58.644026 | orchestrator |  ] 2026-04-11 07:51:58.644032 | orchestrator | } 2026-04-11 07:51:58.644038 | orchestrator | 2026-04-11 07:51:58.644044 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:51:58.644050 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-11 07:51:58.644056 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-11 07:51:58.644065 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-11 07:51:58.644071 | orchestrator | 2026-04-11 07:51:58.644076 | orchestrator | 2026-04-11 07:51:58.644082 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:51:58.644088 | orchestrator | Saturday 11 April 2026 07:51:58 +0000 (0:00:01.688) 0:01:25.995 ******** 2026-04-11 07:51:58.644094 | orchestrator | =============================================================================== 2026-04-11 07:51:58.644099 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 3.67s 2026-04-11 07:51:58.644109 | orchestrator | Get ceph osd tree ------------------------------------------------------- 3.39s 
2026-04-11 07:51:58.644115 | orchestrator | Aggregate test results step one ----------------------------------------- 2.91s 2026-04-11 07:51:58.644121 | orchestrator | Get timestamp for report file ------------------------------------------- 2.80s 2026-04-11 07:51:58.644126 | orchestrator | Write report file ------------------------------------------------------- 2.39s 2026-04-11 07:51:58.644132 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 1.91s 2026-04-11 07:51:58.644138 | orchestrator | Calculate OSD devices for each host ------------------------------------- 1.90s 2026-04-11 07:51:58.644144 | orchestrator | Flush handlers ---------------------------------------------------------- 1.81s 2026-04-11 07:51:58.644149 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 1.80s 2026-04-11 07:51:58.644155 | orchestrator | Flush handlers ---------------------------------------------------------- 1.75s 2026-04-11 07:51:58.644161 | orchestrator | Create report output directory ------------------------------------------ 1.73s 2026-04-11 07:51:58.644167 | orchestrator | Print report file information ------------------------------------------- 1.69s 2026-04-11 07:51:58.644172 | orchestrator | Set test result to passed if count matches ------------------------------ 1.57s 2026-04-11 07:51:58.644178 | orchestrator | Prepare test data ------------------------------------------------------- 1.55s 2026-04-11 07:51:58.644184 | orchestrator | Aggregate test results step two ----------------------------------------- 1.55s 2026-04-11 07:51:58.644190 | orchestrator | Calculate sub test expression results ----------------------------------- 1.55s 2026-04-11 07:51:58.644196 | orchestrator | Prepare test data ------------------------------------------------------- 1.53s 2026-04-11 07:51:58.644201 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 1.53s 2026-04-11 
07:51:58.644208 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 1.42s 2026-04-11 07:51:58.644213 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 1.41s 2026-04-11 07:51:58.846493 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-11 07:51:58.854802 | orchestrator | + set -e 2026-04-11 07:51:58.854874 | orchestrator | + source /opt/manager-vars.sh 2026-04-11 07:51:58.854887 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-11 07:51:58.854898 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-11 07:51:58.854909 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-11 07:51:58.854919 | orchestrator | ++ CEPH_VERSION=reef 2026-04-11 07:51:58.854930 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-11 07:51:58.854942 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-11 07:51:58.854953 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-11 07:51:58.854964 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-11 07:51:58.854974 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-11 07:51:58.854985 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-11 07:51:58.854996 | orchestrator | ++ export ARA=false 2026-04-11 07:51:58.855006 | orchestrator | ++ ARA=false 2026-04-11 07:51:58.855017 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-11 07:51:58.855027 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-11 07:51:58.855038 | orchestrator | ++ export TEMPEST=false 2026-04-11 07:51:58.855048 | orchestrator | ++ TEMPEST=false 2026-04-11 07:51:58.855059 | orchestrator | ++ export IS_ZUUL=true 2026-04-11 07:51:58.855069 | orchestrator | ++ IS_ZUUL=true 2026-04-11 07:51:58.855080 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48 2026-04-11 07:51:58.855091 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.48 2026-04-11 07:51:58.855101 | orchestrator | ++ export EXTERNAL_API=false 2026-04-11 07:51:58.855112 | 
orchestrator | ++ EXTERNAL_API=false 2026-04-11 07:51:58.855122 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-11 07:51:58.855133 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-11 07:51:58.855144 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-11 07:51:58.855155 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-11 07:51:58.855165 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-11 07:51:58.855176 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-11 07:51:58.855186 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-11 07:51:58.855197 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-11 07:51:58.855207 | orchestrator | + source /etc/os-release 2026-04-11 07:51:58.855218 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-11 07:51:58.855254 | orchestrator | ++ NAME=Ubuntu 2026-04-11 07:51:58.855265 | orchestrator | ++ VERSION_ID=24.04 2026-04-11 07:51:58.855276 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-11 07:51:58.855287 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-11 07:51:58.855297 | orchestrator | ++ ID=ubuntu 2026-04-11 07:51:58.855308 | orchestrator | ++ ID_LIKE=debian 2026-04-11 07:51:58.855319 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-11 07:51:58.855329 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-11 07:51:58.855340 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-11 07:51:58.855351 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-11 07:51:58.855363 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-11 07:51:58.855374 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-11 07:51:58.855384 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-11 07:51:58.855398 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-11 07:51:58.855412 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl 
libjson-perl monitoring-plugins-basic mysql-client 2026-04-11 07:51:58.880082 | orchestrator | 2026-04-11 07:51:58.880152 | orchestrator | # Status of Elasticsearch 2026-04-11 07:51:58.880165 | orchestrator | 2026-04-11 07:51:58.880177 | orchestrator | + pushd /opt/configuration/contrib 2026-04-11 07:51:58.880188 | orchestrator | + echo 2026-04-11 07:51:58.880200 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-11 07:51:58.880210 | orchestrator | + echo 2026-04-11 07:51:58.880221 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-11 07:51:59.074152 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-11 07:51:59.074266 | orchestrator | 2026-04-11 07:51:59.074283 | orchestrator | # Status of MariaDB 2026-04-11 07:51:59.074297 | orchestrator | 2026-04-11 07:51:59.074308 | orchestrator | + echo 2026-04-11 07:51:59.074319 | orchestrator | + echo '# Status of MariaDB' 2026-04-11 07:51:59.074330 | orchestrator | + echo 2026-04-11 07:51:59.074341 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-11 07:51:59.133133 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-11 07:51:59.133218 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-11 07:51:59.133227 | orchestrator | + MARIADB_USER=root_shard_0 2026-04-11 07:51:59.133236 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-04-11 07:51:59.202621 | orchestrator | Reading package lists... 2026-04-11 07:51:59.555419 | orchestrator | Building dependency tree... 2026-04-11 07:51:59.557483 | orchestrator | Reading state information... 
2026-04-11 07:51:59.938651 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-04-11 07:51:59.938756 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 2026-04-11 07:52:00.574746 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-04-11 07:52:00.574848 | orchestrator | 2026-04-11 07:52:00.574869 | orchestrator | # Status of Prometheus 2026-04-11 07:52:00.574883 | orchestrator | 2026-04-11 07:52:00.574894 | orchestrator | + echo 2026-04-11 07:52:00.574907 | orchestrator | + echo '# Status of Prometheus' 2026-04-11 07:52:00.574920 | orchestrator | + echo 2026-04-11 07:52:00.574933 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-11 07:52:00.629875 | orchestrator | Unauthorized 2026-04-11 07:52:00.630642 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-11 07:52:00.690143 | orchestrator | Unauthorized 2026-04-11 07:52:00.693256 | orchestrator | 2026-04-11 07:52:00.693302 | orchestrator | # Status of RabbitMQ 2026-04-11 07:52:00.693316 | orchestrator | 2026-04-11 07:52:00.693328 | orchestrator | + echo 2026-04-11 07:52:00.693339 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-11 07:52:00.693350 | orchestrator | + echo 2026-04-11 07:52:00.694518 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-11 07:52:00.759134 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-11 07:52:00.759258 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-11 07:52:00.759290 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-04-11 07:52:01.267291 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-04-11 07:52:01.278220 | orchestrator | 2026-04-11 07:52:01.278337 | orchestrator | # Status of Redis 2026-04-11 07:52:01.278361 | orchestrator | 2026-04-11 07:52:01.278373 | orchestrator | + echo 2026-04-11 07:52:01.278384 
| orchestrator | + echo '# Status of Redis' 2026-04-11 07:52:01.278396 | orchestrator | + echo 2026-04-11 07:52:01.278408 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-11 07:52:01.288602 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002474s;;;0.000000;10.000000 2026-04-11 07:52:01.289286 | orchestrator | 2026-04-11 07:52:01.289380 | orchestrator | # Create backup of MariaDB database 2026-04-11 07:52:01.289396 | orchestrator | + popd 2026-04-11 07:52:01.289408 | orchestrator | + echo 2026-04-11 07:52:01.289419 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-11 07:52:01.289430 | orchestrator | 2026-04-11 07:52:01.289441 | orchestrator | + echo 2026-04-11 07:52:01.289452 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-11 07:52:02.620498 | orchestrator | 2026-04-11 07:52:02 | INFO  | Prepare task for execution of mariadb_backup. 2026-04-11 07:52:02.685213 | orchestrator | 2026-04-11 07:52:02 | INFO  | Task 5c564a34-b3dd-4117-b3e4-fb7d34977e61 (mariadb_backup) was prepared for execution. 2026-04-11 07:52:02.685319 | orchestrator | 2026-04-11 07:52:02 | INFO  | It takes a moment until task 5c564a34-b3dd-4117-b3e4-fb7d34977e61 (mariadb_backup) has been started and output is visible here. 
2026-04-11 07:54:09.303495 | orchestrator | 2026-04-11 07:54:09.303657 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-11 07:54:09.303677 | orchestrator | 2026-04-11 07:54:09.303689 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-11 07:54:09.303701 | orchestrator | Saturday 11 April 2026 07:52:07 +0000 (0:00:01.458) 0:00:01.458 ******** 2026-04-11 07:54:09.303712 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:54:09.303724 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:54:09.303735 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:54:09.303745 | orchestrator | 2026-04-11 07:54:09.303756 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-11 07:54:09.303794 | orchestrator | Saturday 11 April 2026 07:52:09 +0000 (0:00:01.870) 0:00:03.328 ******** 2026-04-11 07:54:09.303813 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-11 07:54:09.303831 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-11 07:54:09.303850 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-11 07:54:09.303869 | orchestrator | 2026-04-11 07:54:09.303886 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-11 07:54:09.303897 | orchestrator | 2026-04-11 07:54:09.303908 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-11 07:54:09.303919 | orchestrator | Saturday 11 April 2026 07:52:12 +0000 (0:00:02.758) 0:00:06.086 ******** 2026-04-11 07:54:09.303930 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-11 07:54:09.303941 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-11 07:54:09.303952 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-11 07:54:09.303963 | orchestrator | 
2026-04-11 07:54:09.303973 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-11 07:54:09.303984 | orchestrator | Saturday 11 April 2026 07:52:14 +0000 (0:00:02.311) 0:00:08.398 ******** 2026-04-11 07:54:09.303996 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-11 07:54:09.304007 | orchestrator | 2026-04-11 07:54:09.304018 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-11 07:54:09.304029 | orchestrator | Saturday 11 April 2026 07:52:16 +0000 (0:00:01.879) 0:00:10.277 ******** 2026-04-11 07:54:09.304042 | orchestrator | ok: [testbed-node-0] 2026-04-11 07:54:09.304055 | orchestrator | ok: [testbed-node-1] 2026-04-11 07:54:09.304067 | orchestrator | ok: [testbed-node-2] 2026-04-11 07:54:09.304080 | orchestrator | 2026-04-11 07:54:09.304114 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-11 07:54:09.304127 | orchestrator | Saturday 11 April 2026 07:52:21 +0000 (0:00:04.886) 0:00:15.164 ******** 2026-04-11 07:54:09.304140 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:54:09.304154 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:54:09.304166 | orchestrator | changed: [testbed-node-0] 2026-04-11 07:54:09.304179 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-11 07:54:09.304191 | orchestrator | 2026-04-11 07:54:09.304209 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-11 07:54:09.304222 | orchestrator | skipping: no hosts matched 2026-04-11 07:54:09.304234 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-11 07:54:09.304247 | orchestrator | 2026-04-11 07:54:09.304260 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2026-04-11 07:54:09.304273 | orchestrator | skipping: no hosts matched 2026-04-11 07:54:09.304285 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-11 07:54:09.304298 | orchestrator | mariadb_bootstrap_restart 2026-04-11 07:54:09.304310 | orchestrator | 2026-04-11 07:54:09.304323 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-11 07:54:09.304335 | orchestrator | skipping: no hosts matched 2026-04-11 07:54:09.304347 | orchestrator | 2026-04-11 07:54:09.304360 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-11 07:54:09.304372 | orchestrator | 2026-04-11 07:54:09.304383 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-11 07:54:09.304393 | orchestrator | Saturday 11 April 2026 07:54:05 +0000 (0:01:43.886) 0:01:59.050 ******** 2026-04-11 07:54:09.304404 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:54:09.304414 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:54:09.304425 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:54:09.304435 | orchestrator | 2026-04-11 07:54:09.304446 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-11 07:54:09.304457 | orchestrator | Saturday 11 April 2026 07:54:06 +0000 (0:00:01.567) 0:02:00.618 ******** 2026-04-11 07:54:09.304467 | orchestrator | skipping: [testbed-node-0] 2026-04-11 07:54:09.304478 | orchestrator | skipping: [testbed-node-1] 2026-04-11 07:54:09.304488 | orchestrator | skipping: [testbed-node-2] 2026-04-11 07:54:09.304499 | orchestrator | 2026-04-11 07:54:09.304509 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:54:09.304521 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-11 07:54:09.304532 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-11 07:54:09.304544 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-11 07:54:09.304554 | orchestrator | 2026-04-11 07:54:09.304565 | orchestrator | 2026-04-11 07:54:09.304575 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:54:09.304586 | orchestrator | Saturday 11 April 2026 07:54:08 +0000 (0:00:02.280) 0:02:02.899 ******** 2026-04-11 07:54:09.304630 | orchestrator | =============================================================================== 2026-04-11 07:54:09.304641 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 103.89s 2026-04-11 07:54:09.304671 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 4.89s 2026-04-11 07:54:09.304683 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.76s 2026-04-11 07:54:09.304693 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 2.31s 2026-04-11 07:54:09.304704 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 2.28s 2026-04-11 07:54:09.304724 | orchestrator | mariadb : include_tasks ------------------------------------------------- 1.88s 2026-04-11 07:54:09.304735 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.87s 2026-04-11 07:54:09.304746 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 1.57s 2026-04-11 07:54:09.504002 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-11 07:54:09.511746 | orchestrator | + set -e 2026-04-11 07:54:09.511814 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-11 07:54:09.511829 | 
orchestrator | ++ export INTERACTIVE=false 2026-04-11 07:54:09.511841 | orchestrator | ++ INTERACTIVE=false 2026-04-11 07:54:09.511852 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-11 07:54:09.511863 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-11 07:54:09.511875 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-11 07:54:09.513042 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-11 07:54:09.520306 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-11 07:54:09.520363 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-11 07:54:09.520375 | orchestrator | 2026-04-11 07:54:09.520387 | orchestrator | # OpenStack endpoints 2026-04-11 07:54:09.520397 | orchestrator | 2026-04-11 07:54:09.520409 | orchestrator | + export OS_CLOUD=admin 2026-04-11 07:54:09.520420 | orchestrator | + OS_CLOUD=admin 2026-04-11 07:54:09.520431 | orchestrator | + echo 2026-04-11 07:54:09.520442 | orchestrator | + echo '# OpenStack endpoints' 2026-04-11 07:54:09.520452 | orchestrator | + echo 2026-04-11 07:54:09.520463 | orchestrator | + openstack endpoint list 2026-04-11 07:54:13.422496 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-11 07:54:13.422664 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-11 07:54:13.422682 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-11 07:54:13.422694 | orchestrator | | 1ad018f322494f91af0b9043fcc2c9a0 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-04-11 07:54:13.422704 | orchestrator | | 228105be0b2948e9a23e79a448f9ca37 | RegionOne | neutron | 
network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-04-11 07:54:13.422733 | orchestrator | | 2308b568855b42b08dbbcd155749cfa5 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-04-11 07:54:13.422745 | orchestrator | | 2cbbb8cf45c4493f95d3d0aed6fef3b9 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-04-11 07:54:13.422755 | orchestrator | | 32ecba1022b24036bfc3b3d5c3a6489d | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-04-11 07:54:13.422766 | orchestrator | | 384912f39ddb467eb93683232ee4d4c2 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-04-11 07:54:13.422776 | orchestrator | | 39afd87d874a4f60828340f84b544426 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-11 07:54:13.422787 | orchestrator | | 409e1d36798b44c0b86658159544b7bd | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-11 07:54:13.422797 | orchestrator | | 40bb89e8313343468a3d734dd7cbd5c6 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-04-11 07:54:13.422808 | orchestrator | | 644dce61aff64c7f90dcec113dea21f1 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-04-11 07:54:13.422847 | orchestrator | | 67929017d01340b7ab1d62629c678027 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-11 07:54:13.422859 | orchestrator | | 6f82c71c52d04ba594e90c3b48bbfd1d | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-04-11 07:54:13.422870 | orchestrator | | 74797a99ce4e4c9795e75bd01a55dd7c | RegionOne | swift | object-store | True | public | 
https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-11 07:54:13.422880 | orchestrator | | 820aee162a8e453b90b5ef6bd134f0b9 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-04-11 07:54:13.422891 | orchestrator | | 82971e6268294ddfa8217b5dfde9cd2a | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-04-11 07:54:13.422901 | orchestrator | | 8e0e8e3106cc4af6ad76d548d408d69b | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-11 07:54:13.422912 | orchestrator | | 95ab37693d804a7cbaac4e1eba557333 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-04-11 07:54:13.422922 | orchestrator | | 9ae5c8d502f545f886cbb7c673b4b3c2 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-04-11 07:54:13.422932 | orchestrator | | 9d33585912a74ce99fbee9f52f7a0b8c | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-04-11 07:54:13.422943 | orchestrator | | b4c93dc2ab394baabb0db3db5edd0f8c | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-04-11 07:54:13.422999 | orchestrator | | b7b597b564464a2cbe22c01ec7114417 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-04-11 07:54:13.423013 | orchestrator | | bc3e5031a97c44f0b0510cc6f4c29a75 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-04-11 07:54:13.423026 | orchestrator | | c08ad837e2b9490094423685d0324f2b | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-11 07:54:13.423040 | orchestrator | | d2c301245cb04ec9acdbb9d1f4c32217 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 
2026-04-11 07:54:13.423052 | orchestrator | | d6c5d33ee149402a9cc0f7552fbe7e4e | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-11 07:54:13.423070 | orchestrator | | f2bcd0859bfc44e0941a3e4ffbadc02b | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-04-11 07:54:13.423083 | orchestrator | | f8f6a0a459ab4056a889dfd8453e3145 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-11 07:54:13.423095 | orchestrator | | fafc524a7a1144878d6c1bc143803385 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-11 07:54:13.423107 | orchestrator | | fc7ad654b4184a1486dd109efceca093 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-04-11 07:54:13.423120 | orchestrator | | fe926b6d19144e6c8bfc9b41166d819f | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-04-11 07:54:13.423141 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-11 07:54:13.673528 | orchestrator | 2026-04-11 07:54:13.673686 | orchestrator | # Cinder 2026-04-11 07:54:13.673706 | orchestrator | 2026-04-11 07:54:13.673718 | orchestrator | + echo 2026-04-11 07:54:13.673730 | orchestrator | + echo '# Cinder' 2026-04-11 07:54:13.673741 | orchestrator | + echo 2026-04-11 07:54:13.673752 | orchestrator | + openstack volume service list 2026-04-11 07:54:16.378429 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-11 07:54:16.378562 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-04-11 07:54:16.378589 | orchestrator | 
+------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-11 07:54:16.378681 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-11T07:54:11.000000 | 2026-04-11 07:54:16.378702 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-11T07:54:11.000000 | 2026-04-11 07:54:16.378721 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-11T07:54:13.000000 | 2026-04-11 07:54:16.378740 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-11T07:54:07.000000 | 2026-04-11 07:54:16.378758 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-11T07:54:11.000000 | 2026-04-11 07:54:16.378776 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-11T07:54:15.000000 | 2026-04-11 07:54:16.378795 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-11T07:54:10.000000 | 2026-04-11 07:54:16.378807 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-11T07:54:10.000000 | 2026-04-11 07:54:16.378818 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-11T07:54:15.000000 | 2026-04-11 07:54:16.378829 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-11 07:54:16.627856 | orchestrator | 2026-04-11 07:54:16.627954 | orchestrator | # Neutron 2026-04-11 07:54:16.627976 | orchestrator | 2026-04-11 07:54:16.627998 | orchestrator | + echo 2026-04-11 07:54:16.628017 | orchestrator | + echo '# Neutron' 2026-04-11 07:54:16.628037 | orchestrator | + echo 2026-04-11 07:54:16.628059 | orchestrator | + openstack network agent list 2026-04-11 07:54:19.385843 | orchestrator | 
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-11 07:54:19.385950 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-04-11 07:54:19.385965 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-11 07:54:19.385977 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-04-11 07:54:19.385988 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-11 07:54:19.385999 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-04-11 07:54:19.386010 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-04-11 07:54:19.386105 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-11 07:54:19.386117 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-04-11 07:54:19.386158 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-11 07:54:19.386185 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-04-11 07:54:19.386206 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-04-11 07:54:19.386217 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 
2026-04-11 07:54:19.656824 | orchestrator | + openstack network service provider list 2026-04-11 07:54:22.371186 | orchestrator | +---------------+------+---------+ 2026-04-11 07:54:22.371313 | orchestrator | | Service Type | Name | Default | 2026-04-11 07:54:22.371338 | orchestrator | +---------------+------+---------+ 2026-04-11 07:54:22.371359 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-04-11 07:54:22.371379 | orchestrator | +---------------+------+---------+ 2026-04-11 07:54:22.651408 | orchestrator | 2026-04-11 07:54:22.651504 | orchestrator | # Nova 2026-04-11 07:54:22.651519 | orchestrator | 2026-04-11 07:54:22.651530 | orchestrator | + echo 2026-04-11 07:54:22.651540 | orchestrator | + echo '# Nova' 2026-04-11 07:54:22.651550 | orchestrator | + echo 2026-04-11 07:54:22.651560 | orchestrator | + openstack compute service list 2026-04-11 07:54:25.363003 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-11 07:54:25.363172 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-04-11 07:54:25.363191 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-11 07:54:25.363203 | orchestrator | | 54195281-6c78-4989-a690-73fe17a9ba90 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-11T07:54:24.000000 | 2026-04-11 07:54:25.363213 | orchestrator | | f5285534-40d6-4fe7-a7b2-3eceba3c1493 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-11T07:54:16.000000 | 2026-04-11 07:54:25.363224 | orchestrator | | 2fb51b3b-1ebc-4a7c-920c-aff571c4f604 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-11T07:54:23.000000 | 2026-04-11 07:54:25.363235 | orchestrator | | ed34d05d-3659-4290-9e25-0a730bd13a8e | nova-conductor | testbed-node-0 | internal | enabled | up | 
2026-04-11T07:54:18.000000 | 2026-04-11 07:54:25.363246 | orchestrator | | 6534626c-c978-4c8d-a5b4-89821c132613 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-11T07:54:18.000000 | 2026-04-11 07:54:25.363256 | orchestrator | | 978b10e4-dbe1-492a-9dec-5f679b0deb96 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-11T07:54:19.000000 | 2026-04-11 07:54:25.363267 | orchestrator | | bc8b9392-07f4-4e84-90af-cd2dbf51524d | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-11T07:54:17.000000 | 2026-04-11 07:54:25.363278 | orchestrator | | f9732319-bf66-47e0-a77c-16729226de0c | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-11T07:54:20.000000 | 2026-04-11 07:54:25.363288 | orchestrator | | 9ce44803-1f36-44b9-9528-726126aae883 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-11T07:54:16.000000 | 2026-04-11 07:54:25.363299 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-11 07:54:25.632067 | orchestrator | + openstack hypervisor list 2026-04-11 07:54:28.194769 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-11 07:54:28.194897 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-04-11 07:54:28.194924 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-11 07:54:28.194962 | orchestrator | | 8f8b3ee1-dad8-458c-8a75-048e9df21836 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-04-11 07:54:28.195026 | orchestrator | | c85402e3-64e6-4aa9-a0c5-fe1f4e0db351 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-04-11 07:54:28.195047 | orchestrator | | fb7d5519-121f-4ff1-8ec1-3f9934d7d655 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-04-11 07:54:28.195065 | orchestrator | 
+--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-11 07:54:28.446896 | orchestrator | 2026-04-11 07:54:28.446996 | orchestrator | # Run OpenStack test play 2026-04-11 07:54:28.447012 | orchestrator | 2026-04-11 07:54:28.447024 | orchestrator | + echo 2026-04-11 07:54:28.447036 | orchestrator | + echo '# Run OpenStack test play' 2026-04-11 07:54:28.447048 | orchestrator | + echo 2026-04-11 07:54:28.447059 | orchestrator | + osism apply --environment openstack test 2026-04-11 07:54:29.778157 | orchestrator | 2026-04-11 07:54:29 | INFO  | Trying to run play test in environment openstack 2026-04-11 07:54:39.951927 | orchestrator | 2026-04-11 07:54:39 | INFO  | Prepare task for execution of test. 2026-04-11 07:54:40.043644 | orchestrator | 2026-04-11 07:54:40 | INFO  | Task d259a9bc-732a-492c-b027-5e0165d02546 (test) was prepared for execution. 2026-04-11 07:54:40.043746 | orchestrator | 2026-04-11 07:54:40 | INFO  | It takes a moment until task d259a9bc-732a-492c-b027-5e0165d02546 (test) has been started and output is visible here. 
2026-04-11 07:57:16.776826 | orchestrator | 2026-04-11 07:57:16.776998 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-11 07:57:16.777020 | orchestrator | 2026-04-11 07:57:16.777033 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-11 07:57:16.777052 | orchestrator | Saturday 11 April 2026 07:54:45 +0000 (0:00:01.385) 0:00:01.385 ******** 2026-04-11 07:57:16.777073 | orchestrator | ok: [localhost] 2026-04-11 07:57:16.777095 | orchestrator | 2026-04-11 07:57:16.777135 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-11 07:57:16.777157 | orchestrator | Saturday 11 April 2026 07:54:51 +0000 (0:00:06.214) 0:00:07.600 ******** 2026-04-11 07:57:16.777179 | orchestrator | ok: [localhost] 2026-04-11 07:57:16.777191 | orchestrator | 2026-04-11 07:57:16.777202 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-04-11 07:57:16.777213 | orchestrator | Saturday 11 April 2026 07:54:56 +0000 (0:00:05.038) 0:00:12.639 ******** 2026-04-11 07:57:16.777225 | orchestrator | changed: [localhost] 2026-04-11 07:57:16.777245 | orchestrator | 2026-04-11 07:57:16.777264 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-11 07:57:16.777284 | orchestrator | Saturday 11 April 2026 07:55:05 +0000 (0:00:09.022) 0:00:21.661 ******** 2026-04-11 07:57:16.777304 | orchestrator | ok: [localhost] 2026-04-11 07:57:16.777323 | orchestrator | 2026-04-11 07:57:16.777342 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-11 07:57:16.777362 | orchestrator | Saturday 11 April 2026 07:55:10 +0000 (0:00:05.029) 0:00:26.690 ******** 2026-04-11 07:57:16.777382 | orchestrator | ok: [localhost] 2026-04-11 07:57:16.777400 | orchestrator | 2026-04-11 07:57:16.777413 | orchestrator | TASK 
[Add member roles to user test] ******************************************* 2026-04-11 07:57:16.777426 | orchestrator | Saturday 11 April 2026 07:55:15 +0000 (0:00:05.113) 0:00:31.804 ******** 2026-04-11 07:57:16.777439 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-11 07:57:16.777451 | orchestrator | ok: [localhost] => (item=member) 2026-04-11 07:57:16.777464 | orchestrator | changed: [localhost] => (item=creator) 2026-04-11 07:57:16.777476 | orchestrator | 2026-04-11 07:57:16.777489 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-11 07:57:16.777501 | orchestrator | Saturday 11 April 2026 07:55:28 +0000 (0:00:13.527) 0:00:45.332 ******** 2026-04-11 07:57:16.777514 | orchestrator | ok: [localhost] 2026-04-11 07:57:16.777526 | orchestrator | 2026-04-11 07:57:16.777539 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-11 07:57:16.777551 | orchestrator | Saturday 11 April 2026 07:55:34 +0000 (0:00:05.301) 0:00:50.634 ******** 2026-04-11 07:57:16.777588 | orchestrator | ok: [localhost] 2026-04-11 07:57:16.777600 | orchestrator | 2026-04-11 07:57:16.777613 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-04-11 07:57:16.777625 | orchestrator | Saturday 11 April 2026 07:55:39 +0000 (0:00:05.061) 0:00:55.695 ******** 2026-04-11 07:57:16.777636 | orchestrator | ok: [localhost] 2026-04-11 07:57:16.777646 | orchestrator | 2026-04-11 07:57:16.777689 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-11 07:57:16.777709 | orchestrator | Saturday 11 April 2026 07:55:44 +0000 (0:00:05.233) 0:01:00.929 ******** 2026-04-11 07:57:16.777728 | orchestrator | ok: [localhost] 2026-04-11 07:57:16.777747 | orchestrator | 2026-04-11 07:57:16.777765 | orchestrator | TASK [Add rule to icmp security group] 
***************************************** 2026-04-11 07:57:16.777784 | orchestrator | Saturday 11 April 2026 07:55:49 +0000 (0:00:04.846) 0:01:05.776 ******** 2026-04-11 07:57:16.777802 | orchestrator | ok: [localhost] 2026-04-11 07:57:16.777820 | orchestrator | 2026-04-11 07:57:16.777837 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-11 07:57:16.777855 | orchestrator | Saturday 11 April 2026 07:55:54 +0000 (0:00:05.052) 0:01:10.829 ******** 2026-04-11 07:57:16.777873 | orchestrator | ok: [localhost] 2026-04-11 07:57:16.777890 | orchestrator | 2026-04-11 07:57:16.777908 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-11 07:57:16.777924 | orchestrator | Saturday 11 April 2026 07:55:59 +0000 (0:00:04.973) 0:01:15.803 ******** 2026-04-11 07:57:16.777940 | orchestrator | ok: [localhost] => (item={'name': 'test-1'}) 2026-04-11 07:57:16.777958 | orchestrator | ok: [localhost] => (item={'name': 'test-2'}) 2026-04-11 07:57:16.777976 | orchestrator | ok: [localhost] => (item={'name': 'test-3'}) 2026-04-11 07:57:16.777993 | orchestrator | 2026-04-11 07:57:16.778011 | orchestrator | TASK [Create test subnets] ***************************************************** 2026-04-11 07:57:16.778109 | orchestrator | Saturday 11 April 2026 07:56:12 +0000 (0:00:12.685) 0:01:28.488 ******** 2026-04-11 07:57:16.778131 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-11 07:57:16.778168 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-11 07:57:16.778187 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-11 07:57:16.778199 | orchestrator | 2026-04-11 07:57:16.778210 | orchestrator | TASK [Create test routers] 
***************************************************** 2026-04-11 07:57:16.778222 | orchestrator | Saturday 11 April 2026 07:56:24 +0000 (0:00:12.871) 0:01:41.360 ******** 2026-04-11 07:57:16.778240 | orchestrator | ok: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-11 07:57:16.778257 | orchestrator | ok: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-11 07:57:16.778268 | orchestrator | ok: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-11 07:57:16.778279 | orchestrator | 2026-04-11 07:57:16.778289 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-11 07:57:16.778300 | orchestrator | 2026-04-11 07:57:16.778310 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-11 07:57:16.778321 | orchestrator | Saturday 11 April 2026 07:56:39 +0000 (0:00:14.492) 0:01:55.852 ******** 2026-04-11 07:57:16.778332 | orchestrator | ok: [localhost] 2026-04-11 07:57:16.778343 | orchestrator | 2026-04-11 07:57:16.778377 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-04-11 07:57:16.778389 | orchestrator | Saturday 11 April 2026 07:56:44 +0000 (0:00:04.877) 0:02:00.730 ******** 2026-04-11 07:57:16.778406 | orchestrator | skipping: [localhost] 2026-04-11 07:57:16.778425 | orchestrator | 2026-04-11 07:57:16.778443 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-11 07:57:16.778460 | orchestrator | Saturday 11 April 2026 07:56:45 +0000 (0:00:01.140) 0:02:01.871 ******** 2026-04-11 07:57:16.778496 | orchestrator | skipping: [localhost] 2026-04-11 07:57:16.778516 | orchestrator | 2026-04-11 07:57:16.778534 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-11 07:57:16.778554 | orchestrator | Saturday 11 April 
2026 07:56:46 +0000 (0:00:01.152) 0:02:03.023 ******** 2026-04-11 07:57:16.778572 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-04-11 07:57:16.778587 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-04-11 07:57:16.778597 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-04-11 07:57:16.778608 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-04-11 07:57:16.778618 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-04-11 07:57:16.778629 | orchestrator | skipping: [localhost] 2026-04-11 07:57:16.778639 | orchestrator | 2026-04-11 07:57:16.778650 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-04-11 07:57:16.778694 | orchestrator | Saturday 11 April 2026 07:56:47 +0000 (0:00:01.236) 0:02:04.259 ******** 2026-04-11 07:57:16.778705 | orchestrator | skipping: [localhost] 2026-04-11 07:57:16.778715 | orchestrator | 2026-04-11 07:57:16.778726 | orchestrator | TASK [Create test instances] *************************************************** 2026-04-11 07:57:16.778736 | orchestrator | Saturday 11 April 2026 07:56:49 +0000 (0:00:01.220) 0:02:05.480 ******** 2026-04-11 07:57:16.778747 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-11 07:57:16.778758 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-11 07:57:16.778768 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-11 07:57:16.778779 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-11 07:57:16.778789 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-11 07:57:16.778800 | orchestrator | 2026-04-11 07:57:16.778810 | 
orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-04-11 07:57:16.778821 | orchestrator | Saturday 11 April 2026 07:56:55 +0000 (0:00:05.921) 0:02:11.401 ******** 2026-04-11 07:57:16.778832 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-04-11 07:57:16.778846 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j113469401260.4279', 'results_file': '/ansible/.ansible_async/j113469401260.4279', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:57:16.778873 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j825791349527.4304', 'results_file': '/ansible/.ansible_async/j825791349527.4304', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:57:16.778885 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j932101014904.4329', 'results_file': '/ansible/.ansible_async/j932101014904.4329', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:57:16.778896 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j4581533885.4354', 'results_file': '/ansible/.ansible_async/j4581533885.4354', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:57:16.778907 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j84939606135.4379', 'results_file': '/ansible/.ansible_async/j84939606135.4379', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:57:16.778918 | orchestrator | 2026-04-11 07:57:16.778929 | orchestrator | 
TASK [Add metadata to instances] *********************************************** 2026-04-11 07:57:16.778951 | orchestrator | Saturday 11 April 2026 07:57:11 +0000 (0:00:15.968) 0:02:27.370 ******** 2026-04-11 07:57:16.778971 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-11 07:57:16.778991 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-11 07:57:16.779009 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-11 07:57:16.779027 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-11 07:57:16.779048 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-11 07:57:16.779068 | orchestrator | 2026-04-11 07:57:16.779088 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-04-11 07:57:16.779118 | orchestrator | Saturday 11 April 2026 07:57:16 +0000 (0:00:05.761) 0:02:33.132 ******** 2026-04-11 07:58:18.192997 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j424633125770.4457', 'results_file': '/ansible/.ansible_async/j424633125770.4457', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:58:18.193131 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j427377175863.4482', 'results_file': '/ansible/.ansible_async/j427377175863.4482', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:58:18.193150 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j775776710940.4507', 'results_file': '/ansible/.ansible_async/j775776710940.4507', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 
2026-04-11 07:58:18.193162 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j964053634372.4532', 'results_file': '/ansible/.ansible_async/j964053634372.4532', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:58:18.193174 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j962445448834.4557', 'results_file': '/ansible/.ansible_async/j962445448834.4557', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:58:18.193186 | orchestrator | 2026-04-11 07:58:18.193198 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-04-11 07:58:18.193212 | orchestrator | Saturday 11 April 2026 07:57:21 +0000 (0:00:05.102) 0:02:38.235 ******** 2026-04-11 07:58:18.193222 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-11 07:58:18.193234 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-11 07:58:18.193244 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-11 07:58:18.193255 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-11 07:58:18.193266 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-11 07:58:18.193277 | orchestrator | 2026-04-11 07:58:18.193287 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-04-11 07:58:18.193298 | orchestrator | Saturday 11 April 2026 07:57:27 +0000 (0:00:05.805) 0:02:44.040 ******** 2026-04-11 07:58:18.193309 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-04-11 07:58:18.193321 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j945381218614.4621', 'results_file': '/ansible/.ansible_async/j945381218614.4621', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:58:18.193333 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j158848727510.4646', 'results_file': '/ansible/.ansible_async/j158848727510.4646', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:58:18.193367 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j289674024938.4672', 'results_file': '/ansible/.ansible_async/j289674024938.4672', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:58:18.193379 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j408631969040.4698', 'results_file': '/ansible/.ansible_async/j408631969040.4698', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:58:18.193390 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j23258891882.4724', 'results_file': '/ansible/.ansible_async/j23258891882.4724', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-11 07:58:18.193401 | orchestrator | 2026-04-11 07:58:18.193412 | orchestrator | TASK [Create test volume] ****************************************************** 2026-04-11 07:58:18.193423 | orchestrator | Saturday 11 April 2026 07:57:39 +0000 (0:00:11.575) 0:02:55.616 ******** 2026-04-11 07:58:18.193434 | orchestrator | ok: [localhost] 2026-04-11 07:58:18.193446 | orchestrator | 2026-04-11 07:58:18.193457 | orchestrator | 
TASK [Attach test volume] ****************************************************** 2026-04-11 07:58:18.193471 | orchestrator | Saturday 11 April 2026 07:57:44 +0000 (0:00:05.062) 0:03:00.678 ******** 2026-04-11 07:58:18.193492 | orchestrator | ok: [localhost] 2026-04-11 07:58:18.193514 | orchestrator | 2026-04-11 07:58:18.193534 | orchestrator | TASK [Create floating ip addresses] ******************************************** 2026-04-11 07:58:18.193566 | orchestrator | Saturday 11 April 2026 07:57:50 +0000 (0:00:06.123) 0:03:06.802 ******** 2026-04-11 07:58:18.193580 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-11 07:58:18.193593 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-11 07:58:18.193607 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-11 07:58:18.193625 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-11 07:58:18.193638 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-11 07:58:18.193651 | orchestrator | 2026-04-11 07:58:18.193688 | orchestrator | TASK [Print floating ip addresses] ********************************************* 2026-04-11 07:58:18.193702 | orchestrator | Saturday 11 April 2026 07:58:16 +0000 (0:00:25.905) 0:03:32.708 ******** 2026-04-11 07:58:18.193714 | orchestrator | ok: [localhost] => (item=test) => { 2026-04-11 07:58:18.193727 | orchestrator |  "msg": "test: 192.168.112.166" 2026-04-11 07:58:18.193739 | orchestrator | } 2026-04-11 07:58:18.193752 | orchestrator | ok: [localhost] => (item=test-1) => { 2026-04-11 07:58:18.193764 | orchestrator |  "msg": "test-1: 192.168.112.108" 2026-04-11 07:58:18.193776 | orchestrator | } 2026-04-11 07:58:18.193789 | orchestrator | ok: [localhost] => (item=test-2) => { 2026-04-11 07:58:18.193801 | orchestrator |  "msg": "test-2: 192.168.112.188" 2026-04-11 07:58:18.193813 | orchestrator | } 
2026-04-11 07:58:18.193825 | orchestrator | ok: [localhost] => (item=test-3) => { 2026-04-11 07:58:18.193837 | orchestrator |  "msg": "test-3: 192.168.112.136" 2026-04-11 07:58:18.193849 | orchestrator | } 2026-04-11 07:58:18.193861 | orchestrator | ok: [localhost] => (item=test-4) => { 2026-04-11 07:58:18.193873 | orchestrator |  "msg": "test-4: 192.168.112.146" 2026-04-11 07:58:18.193886 | orchestrator | } 2026-04-11 07:58:18.193897 | orchestrator | 2026-04-11 07:58:18.193907 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-11 07:58:18.193919 | orchestrator | localhost : ok=26  changed=8  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-11 07:58:18.193942 | orchestrator | 2026-04-11 07:58:18.193953 | orchestrator | 2026-04-11 07:58:18.193964 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-11 07:58:18.193975 | orchestrator | Saturday 11 April 2026 07:58:17 +0000 (0:00:01.579) 0:03:34.288 ******** 2026-04-11 07:58:18.193986 | orchestrator | =============================================================================== 2026-04-11 07:58:18.193997 | orchestrator | Create floating ip addresses ------------------------------------------- 25.90s 2026-04-11 07:58:18.194007 | orchestrator | Wait for instance creation to complete --------------------------------- 15.97s 2026-04-11 07:58:18.194084 | orchestrator | Create test routers ---------------------------------------------------- 14.49s 2026-04-11 07:58:18.194107 | orchestrator | Add member roles to user test ------------------------------------------ 13.53s 2026-04-11 07:58:18.194125 | orchestrator | Create test subnets ---------------------------------------------------- 12.87s 2026-04-11 07:58:18.194145 | orchestrator | Create test networks --------------------------------------------------- 12.69s 2026-04-11 07:58:18.194163 | orchestrator | Wait for tags to be added 
---------------------------------------------- 11.57s 2026-04-11 07:58:18.194180 | orchestrator | Add manager role to user test-admin ------------------------------------- 9.02s 2026-04-11 07:58:18.194191 | orchestrator | Create test domain ------------------------------------------------------ 6.21s 2026-04-11 07:58:18.194201 | orchestrator | Attach test volume ------------------------------------------------------ 6.13s 2026-04-11 07:58:18.194212 | orchestrator | Create test instances --------------------------------------------------- 5.92s 2026-04-11 07:58:18.194222 | orchestrator | Add tag to instances ---------------------------------------------------- 5.81s 2026-04-11 07:58:18.194233 | orchestrator | Add metadata to instances ----------------------------------------------- 5.76s 2026-04-11 07:58:18.194243 | orchestrator | Create test server group ------------------------------------------------ 5.30s 2026-04-11 07:58:18.194254 | orchestrator | Add rule to ssh security group ------------------------------------------ 5.23s 2026-04-11 07:58:18.194264 | orchestrator | Create test user -------------------------------------------------------- 5.11s 2026-04-11 07:58:18.194275 | orchestrator | Wait for metadata to be added ------------------------------------------- 5.10s 2026-04-11 07:58:18.194285 | orchestrator | Create test volume ------------------------------------------------------ 5.06s 2026-04-11 07:58:18.194296 | orchestrator | Create ssh security group ----------------------------------------------- 5.06s 2026-04-11 07:58:18.194306 | orchestrator | Add rule to icmp security group ----------------------------------------- 5.05s 2026-04-11 07:58:18.416583 | orchestrator | + server_list 2026-04-11 07:58:18.416744 | orchestrator | + openstack --os-cloud test server list 2026-04-11 07:58:22.267497 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 
2026-04-11 07:58:22.267604 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-11 07:58:22.267619 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-11 07:58:22.267631 | orchestrator | | 3b8ec0b1-a556-45d2-80eb-d40412651344 | test-4 | ACTIVE | test-3=192.168.112.146, 192.168.202.244 | N/A (booted from volume) | SCS-1L-1 |
2026-04-11 07:58:22.267642 | orchestrator | | c4273570-b6e3-4b24-b5a3-11f3111313ad | test-3 | ACTIVE | test-2=192.168.112.136, 192.168.201.190 | N/A (booted from volume) | SCS-1L-1 |
2026-04-11 07:58:22.267653 | orchestrator | | cf244b49-ecc5-446b-858f-201bcece8db1 | test-2 | ACTIVE | test-2=192.168.112.188, 192.168.201.146 | N/A (booted from volume) | SCS-1L-1 |
2026-04-11 07:58:22.267757 | orchestrator | | 9e9e3efb-7bd1-4de6-a196-833998bea147 | test | ACTIVE | test-1=192.168.112.166, 192.168.200.215 | N/A (booted from volume) | SCS-1L-1 |
2026-04-11 07:58:22.267794 | orchestrator | | a2fdb24a-1036-4d5a-94e9-f935448d3ed1 | test-1 | ACTIVE | test-1=192.168.112.108, 192.168.200.75 | N/A (booted from volume) | SCS-1L-1 |
2026-04-11 07:58:22.267805 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-11 07:58:22.534428 | orchestrator | + openstack --os-cloud test server show test
2026-04-11 07:58:25.692896 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:25.693029 | orchestrator | | Field | Value |
2026-04-11 07:58:25.693063 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:25.693090 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-11 07:58:25.693111 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-11 07:58:25.693131 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-11 07:58:25.693153 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-04-11 07:58:25.693173 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-11 07:58:25.693195 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-11 07:58:25.693266 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-11 07:58:25.693281 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-11 07:58:25.693292 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-11 07:58:25.693304 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-11 07:58:25.693315 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-11 07:58:25.693325 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-11 07:58:25.693336 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-11 07:58:25.693347 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-11 07:58:25.693358 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-11 07:58:25.693369 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-11T04:17:34.000000 |
2026-04-11 07:58:25.693400 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-11 07:58:25.693414 | orchestrator | | accessIPv4 | |
2026-04-11 07:58:25.693427 | orchestrator | | accessIPv6 | |
2026-04-11 07:58:25.693442 | orchestrator | | addresses | test-1=192.168.112.166, 192.168.200.215 |
2026-04-11 07:58:25.693455 | orchestrator | | config_drive | |
2026-04-11 07:58:25.693473 | orchestrator | | created | 2026-04-11T04:17:07Z |
2026-04-11 07:58:25.693497 | orchestrator | | description | None |
2026-04-11 07:58:25.693525 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-11 07:58:25.693543 | orchestrator | | hostId | 2771b4cb1e58059f56d80dcaed3b9287c9915a466d4201a362df41a3 |
2026-04-11 07:58:25.693574 | orchestrator | | host_status | None |
2026-04-11 07:58:25.693613 | orchestrator | | id | 9e9e3efb-7bd1-4de6-a196-833998bea147 |
2026-04-11 07:58:25.693633 | orchestrator | | image | N/A (booted from volume) |
2026-04-11 07:58:25.693653 | orchestrator | | key_name | test |
2026-04-11 07:58:25.693710 | orchestrator | | locked | False |
2026-04-11 07:58:25.693728 | orchestrator | | locked_reason | None |
2026-04-11 07:58:25.693747 | orchestrator | | name | test |
2026-04-11 07:58:25.693765 | orchestrator | | pinned_availability_zone | None |
2026-04-11 07:58:25.693783 | orchestrator | | progress | 0 |
2026-04-11 07:58:25.693818 | orchestrator | | project_id | 7fefb91f0b6142afa71e9a650608bd96 |
2026-04-11 07:58:25.693838 | orchestrator | | properties | hostname='test' |
2026-04-11 07:58:25.693871 | orchestrator | | security_groups | name='icmp' |
2026-04-11 07:58:25.694366 | orchestrator | | | name='ssh' |
2026-04-11 07:58:25.694389 | orchestrator | | server_groups | None |
2026-04-11 07:58:25.694400 | orchestrator | | status | ACTIVE |
2026-04-11 07:58:25.694412 | orchestrator | | tags | test |
2026-04-11 07:58:25.694423 | orchestrator | | trusted_image_certificates | None |
2026-04-11 07:58:25.694434 | orchestrator | | updated | 2026-04-11T07:57:17Z |
2026-04-11 07:58:25.694458 | orchestrator | | user_id | 4634a46a1c28429e8586f572d2ed1194 |
2026-04-11 07:58:25.694470 | orchestrator | | volumes_attached | delete_on_termination='True', id='f20dc248-0394-4260-8d30-9797cea79271' |
2026-04-11 07:58:25.694481 | orchestrator | | | delete_on_termination='False', id='334b69c3-7ca1-480d-bfc8-8e7b77f02a1d' |
2026-04-11 07:58:25.697278 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:25.958409 | orchestrator | + openstack --os-cloud test server show test-1
2026-04-11 07:58:28.957011 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:28.957146 | orchestrator | | Field | Value |
2026-04-11 07:58:28.957172 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:28.957191 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-11 07:58:28.957210 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-11 07:58:28.957263 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-11 07:58:28.957302 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-04-11 07:58:28.957321 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-11 07:58:28.957340 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-11 07:58:28.957386 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-11 07:58:28.957407 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-11 07:58:28.957428 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-11 07:58:28.957449 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-11 07:58:28.957467 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-11 07:58:28.957485 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-11 07:58:28.957523 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-11 07:58:28.957543 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-11 07:58:28.957562 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-11 07:58:28.957581 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-11T04:17:34.000000 |
2026-04-11 07:58:28.957612 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-11 07:58:28.957632 | orchestrator | | accessIPv4 | |
2026-04-11 07:58:28.957650 | orchestrator | | accessIPv6 | |
2026-04-11 07:58:28.957698 | orchestrator | | addresses | test-1=192.168.112.108, 192.168.200.75 |
2026-04-11 07:58:28.957720 | orchestrator | | config_drive | |
2026-04-11 07:58:28.957748 | orchestrator | | created | 2026-04-11T04:17:07Z |
2026-04-11 07:58:28.957776 | orchestrator | | description | None |
2026-04-11 07:58:28.957796 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-11 07:58:28.957815 | orchestrator | | hostId | 2771b4cb1e58059f56d80dcaed3b9287c9915a466d4201a362df41a3 |
2026-04-11 07:58:28.957835 | orchestrator | | host_status | None |
2026-04-11 07:58:28.957869 | orchestrator | | id | a2fdb24a-1036-4d5a-94e9-f935448d3ed1 |
2026-04-11 07:58:28.957889 | orchestrator | | image | N/A (booted from volume) |
2026-04-11 07:58:28.957901 | orchestrator | | key_name | test |
2026-04-11 07:58:28.957912 | orchestrator | | locked | False |
2026-04-11 07:58:28.957932 | orchestrator | | locked_reason | None |
2026-04-11 07:58:28.957943 | orchestrator | | name | test-1 |
2026-04-11 07:58:28.957959 | orchestrator | | pinned_availability_zone | None |
2026-04-11 07:58:28.957970 | orchestrator | | progress | 0 |
2026-04-11 07:58:28.957981 | orchestrator | | project_id | 7fefb91f0b6142afa71e9a650608bd96 |
2026-04-11 07:58:28.957992 | orchestrator | | properties | hostname='test-1' |
2026-04-11 07:58:28.958011 | orchestrator | | security_groups | name='icmp' |
2026-04-11 07:58:28.958083 | orchestrator | | | name='ssh' |
2026-04-11 07:58:28.958096 | orchestrator | | server_groups | None |
2026-04-11 07:58:28.958114 | orchestrator | | status | ACTIVE |
2026-04-11 07:58:28.958125 | orchestrator | | tags | test |
2026-04-11 07:58:28.958136 | orchestrator | | trusted_image_certificates | None |
2026-04-11 07:58:28.958152 | orchestrator | | updated | 2026-04-11T07:57:17Z |
2026-04-11 07:58:28.958163 | orchestrator | | user_id | 4634a46a1c28429e8586f572d2ed1194 |
2026-04-11 07:58:28.958174 | orchestrator | | volumes_attached | delete_on_termination='True', id='d66ffbb8-185a-4949-b4f5-74ecaf7800f5' |
2026-04-11 07:58:28.961057 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:29.228077 | orchestrator | + openstack --os-cloud test server show test-2
2026-04-11 07:58:32.352571 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:32.352715 | orchestrator | | Field | Value |
2026-04-11 07:58:32.352745 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:32.352754 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-11 07:58:32.352761 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-11 07:58:32.352768 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-11 07:58:32.352776 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-04-11 07:58:32.352782 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-11 07:58:32.352788 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-11 07:58:32.352809 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-11 07:58:32.352816 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-11 07:58:32.352823 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-11 07:58:32.352838 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-11 07:58:32.352845 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-11 07:58:32.352851 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-11 07:58:32.352919 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-11 07:58:32.352933 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-11 07:58:32.352939 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-11 07:58:32.352946 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-11T04:17:35.000000 |
2026-04-11 07:58:32.352959 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-11 07:58:32.352966 | orchestrator | | accessIPv4 | |
2026-04-11 07:58:32.352978 | orchestrator | | accessIPv6 | |
2026-04-11 07:58:32.352985 | orchestrator | | addresses | test-2=192.168.112.188, 192.168.201.146 |
2026-04-11 07:58:32.352991 | orchestrator | | config_drive | |
2026-04-11 07:58:32.352998 | orchestrator | | created | 2026-04-11T04:17:08Z |
2026-04-11 07:58:32.353005 | orchestrator | | description | None |
2026-04-11 07:58:32.353015 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-11 07:58:32.353021 | orchestrator | | hostId | 511cad18ea7291a59af43418ed7a0e2d21e972767f13bde6f4d27ac9 |
2026-04-11 07:58:32.353028 | orchestrator | | host_status | None |
2026-04-11 07:58:32.353039 | orchestrator | | id | cf244b49-ecc5-446b-858f-201bcece8db1 |
2026-04-11 07:58:32.353050 | orchestrator | | image | N/A (booted from volume) |
2026-04-11 07:58:32.353057 | orchestrator | | key_name | test |
2026-04-11 07:58:32.353064 | orchestrator | | locked | False |
2026-04-11 07:58:32.353071 | orchestrator | | locked_reason | None |
2026-04-11 07:58:32.353077 | orchestrator | | name | test-2 |
2026-04-11 07:58:32.353084 | orchestrator | | pinned_availability_zone | None |
2026-04-11 07:58:32.353094 | orchestrator | | progress | 0 |
2026-04-11 07:58:32.353100 | orchestrator | | project_id | 7fefb91f0b6142afa71e9a650608bd96 |
2026-04-11 07:58:32.353107 | orchestrator | | properties | hostname='test-2' |
2026-04-11 07:58:32.353122 | orchestrator | | security_groups | name='icmp' |
2026-04-11 07:58:32.353129 | orchestrator | | | name='ssh' |
2026-04-11 07:58:32.353136 | orchestrator | | server_groups | None |
2026-04-11 07:58:32.353143 | orchestrator | | status | ACTIVE |
2026-04-11 07:58:32.353150 | orchestrator | | tags | test |
2026-04-11 07:58:32.353156 | orchestrator | | trusted_image_certificates | None |
2026-04-11 07:58:32.353166 | orchestrator | | updated | 2026-04-11T07:57:18Z |
2026-04-11 07:58:32.353173 | orchestrator | | user_id | 4634a46a1c28429e8586f572d2ed1194 |
2026-04-11 07:58:32.353180 | orchestrator | | volumes_attached | delete_on_termination='True', id='6affec2b-cc2e-4873-add9-8d3ff5b665a0' |
2026-04-11 07:58:32.357522 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:32.624947 | orchestrator | + openstack --os-cloud test server show test-3
2026-04-11 07:58:35.630696 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:35.630805 | orchestrator | | Field | Value |
2026-04-11 07:58:35.630821 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:35.630833 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-11 07:58:35.630845 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-11 07:58:35.630856 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-11 07:58:35.630884 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-04-11 07:58:35.630897 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-11 07:58:35.630908 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-11 07:58:35.630962 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-11 07:58:35.630975 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-11 07:58:35.630986 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-11 07:58:35.630997 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-11 07:58:35.631008 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-11 07:58:35.631019 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-11 07:58:35.631031 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-11 07:58:35.631073 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-11 07:58:35.631085 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-11 07:58:35.631104 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-11T04:17:36.000000 |
2026-04-11 07:58:35.631122 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-11 07:58:35.631134 | orchestrator | | accessIPv4 | |
2026-04-11 07:58:35.631145 | orchestrator | | accessIPv6 | |
2026-04-11 07:58:35.631156 | orchestrator | | addresses | test-2=192.168.112.136, 192.168.201.190 |
2026-04-11 07:58:35.631167 | orchestrator | | config_drive | |
2026-04-11 07:58:35.631178 | orchestrator | | created | 2026-04-11T04:17:09Z |
2026-04-11 07:58:35.631188 | orchestrator | | description | None |
2026-04-11 07:58:35.631204 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-11 07:58:35.631230 | orchestrator | | hostId | 511cad18ea7291a59af43418ed7a0e2d21e972767f13bde6f4d27ac9 |
2026-04-11 07:58:35.631242 | orchestrator | | host_status | None |
2026-04-11 07:58:35.631261 | orchestrator | | id | c4273570-b6e3-4b24-b5a3-11f3111313ad |
2026-04-11 07:58:35.631273 | orchestrator | | image | N/A (booted from volume) |
2026-04-11 07:58:35.631284 | orchestrator | | key_name | test |
2026-04-11 07:58:35.631295 | orchestrator | | locked | False |
2026-04-11 07:58:35.631306 | orchestrator | | locked_reason | None |
2026-04-11 07:58:35.631317 | orchestrator | | name | test-3 |
2026-04-11 07:58:35.631328 | orchestrator | | pinned_availability_zone | None |
2026-04-11 07:58:35.631351 | orchestrator | | progress | 0 |
2026-04-11 07:58:35.631363 | orchestrator | | project_id | 7fefb91f0b6142afa71e9a650608bd96 |
2026-04-11 07:58:35.631374 | orchestrator | | properties | hostname='test-3' |
2026-04-11 07:58:35.631392 | orchestrator | | security_groups | name='icmp' |
2026-04-11 07:58:35.631403 | orchestrator | | | name='ssh' |
2026-04-11 07:58:35.631414 | orchestrator | | server_groups | None |
2026-04-11 07:58:35.631425 | orchestrator | | status | ACTIVE |
2026-04-11 07:58:35.631436 | orchestrator | | tags | test |
2026-04-11 07:58:35.631447 | orchestrator | | trusted_image_certificates | None |
2026-04-11 07:58:35.631458 | orchestrator | | updated | 2026-04-11T07:57:19Z |
2026-04-11 07:58:35.631476 | orchestrator | | user_id | 4634a46a1c28429e8586f572d2ed1194 |
2026-04-11 07:58:35.631487 | orchestrator | | volumes_attached | delete_on_termination='True', id='4a1527a7-4e68-4867-a03a-ee81f91d4889' |
2026-04-11 07:58:35.634804 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:35.907056 | orchestrator | + openstack --os-cloud test server show test-4
2026-04-11 07:58:38.820565 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:38.820660 | orchestrator | | Field | Value |
2026-04-11 07:58:38.820704 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:38.820714 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-11 07:58:38.820722 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-11 07:58:38.820730 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-11 07:58:38.820755 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-04-11 07:58:38.820768 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-11 07:58:38.820776 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-11 07:58:38.820801 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-11 07:58:38.820810 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-11 07:58:38.820818 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-11 07:58:38.820826 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-11 07:58:38.820834 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-11 07:58:38.820842 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-11 07:58:38.820856 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-11 07:58:38.820868 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-11 07:58:38.820877 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-11 07:58:38.820885 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-11T04:17:38.000000 |
2026-04-11 07:58:38.820899 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-11 07:58:38.820907 | orchestrator | | accessIPv4 | |
2026-04-11 07:58:38.820915 | orchestrator | | accessIPv6 | |
2026-04-11 07:58:38.820923 | orchestrator | | addresses | test-3=192.168.112.146, 192.168.202.244 |
2026-04-11 07:58:38.820931 | orchestrator | | config_drive | |
2026-04-11 07:58:38.820945 | orchestrator | | created | 2026-04-11T04:17:12Z |
2026-04-11 07:58:38.820954 | orchestrator | | description | None |
2026-04-11 07:58:38.820965 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-11 07:58:38.820974 | orchestrator | | hostId | a016dd2d1f272dd2c23f3e85b9255e71f0f248d00bb8edd8d4b20d00 |
2026-04-11 07:58:38.820982 | orchestrator | | host_status | None |
2026-04-11 07:58:38.820995 | orchestrator | | id | 3b8ec0b1-a556-45d2-80eb-d40412651344 |
2026-04-11 07:58:38.821003 | orchestrator | | image | N/A (booted from volume) |
2026-04-11 07:58:38.821012 | orchestrator | | key_name | test |
2026-04-11 07:58:38.821019 | orchestrator | | locked | False |
2026-04-11 07:58:38.821027 | orchestrator | | locked_reason | None |
2026-04-11 07:58:38.821041 | orchestrator | | name | test-4 |
2026-04-11 07:58:38.821050 | orchestrator | | pinned_availability_zone | None |
2026-04-11 07:58:38.821064 | orchestrator | | progress | 0 |
2026-04-11 07:58:38.821073 | orchestrator | | project_id | 7fefb91f0b6142afa71e9a650608bd96 |
2026-04-11 07:58:38.821083 | orchestrator | | properties | hostname='test-4' |
2026-04-11 07:58:38.821098 | orchestrator | | security_groups | name='icmp' |
2026-04-11 07:58:38.821107 | orchestrator | | | name='ssh' |
2026-04-11 07:58:38.821117 | orchestrator | | server_groups | None |
2026-04-11 07:58:38.821126 | orchestrator | | status | ACTIVE |
2026-04-11 07:58:38.821141 | orchestrator | | tags | test |
2026-04-11 07:58:38.821150 | orchestrator | | trusted_image_certificates | None |
2026-04-11 07:58:38.821160 | orchestrator | | updated | 2026-04-11T07:57:20Z |
2026-04-11 07:58:38.821172 | orchestrator | | user_id | 4634a46a1c28429e8586f572d2ed1194 |
2026-04-11 07:58:38.821182 | orchestrator | | volumes_attached | delete_on_termination='True', id='4448a5ff-2f01-40e7-92ec-27aa7ae35dd0' |
2026-04-11 07:58:38.824571 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-11 07:58:39.073953 | orchestrator | + server_ping
2026-04-11 07:58:39.074399 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-11 07:58:39.074806 | orchestrator | ++ tr -d '\r'
2026-04-11 07:58:41.882345 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-11 07:58:41.882455 | orchestrator | + ping -c3 192.168.112.146
2026-04-11 07:58:41.899907 | orchestrator | PING 192.168.112.146 (192.168.112.146) 56(84) bytes of data.
2026-04-11 07:58:41.900004 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=1 ttl=63 time=9.62 ms
2026-04-11 07:58:42.894917 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=2 ttl=63 time=2.76 ms
2026-04-11 07:58:43.896276 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=3 ttl=63 time=2.15 ms
2026-04-11 07:58:43.896373 | orchestrator |
2026-04-11 07:58:43.896389 | orchestrator | --- 192.168.112.146 ping statistics ---
2026-04-11 07:58:43.896402 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-11 07:58:43.896413 | orchestrator | rtt min/avg/max/mdev = 2.149/4.844/9.622/3.387 ms
2026-04-11 07:58:43.897105 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-11 07:58:43.897222 | orchestrator | + ping -c3 192.168.112.188
2026-04-11 07:58:43.910969 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2026-04-11 07:58:43.911019 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=9.35 ms
2026-04-11 07:58:44.906218 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.50 ms
2026-04-11 07:58:45.906810 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.95 ms
2026-04-11 07:58:45.906913 | orchestrator |
2026-04-11 07:58:45.906931 | orchestrator | --- 192.168.112.188 ping statistics ---
2026-04-11 07:58:45.906944 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-11 07:58:45.906956 | orchestrator | rtt min/avg/max/mdev = 1.954/4.603/9.354/3.366 ms
2026-04-11 07:58:45.906980 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-11 07:58:45.906993 | orchestrator | + ping -c3 192.168.112.108
2026-04-11 07:58:45.920614 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2026-04-11 07:58:45.920721 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=9.13 ms
2026-04-11 07:58:46.914990 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.54 ms
2026-04-11 07:58:47.917126 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.94 ms
2026-04-11 07:58:47.917225 | orchestrator |
2026-04-11 07:58:47.917241 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-04-11 07:58:47.917254 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-11 07:58:47.917265 | orchestrator | rtt min/avg/max/mdev = 1.941/4.537/9.134/3.259 ms
2026-04-11 07:58:47.917277 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-11 07:58:47.917288 | orchestrator | + ping -c3 192.168.112.136
2026-04-11 07:58:47.929197 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data.
2026-04-11 07:58:47.929264 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=7.90 ms
2026-04-11 07:58:48.924995 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=2.43 ms
2026-04-11 07:58:49.926336 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=1.79 ms
2026-04-11 07:58:49.926450 | orchestrator |
2026-04-11 07:58:49.926465 | orchestrator | --- 192.168.112.136 ping statistics ---
2026-04-11 07:58:49.926478 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-11 07:58:49.926489 | orchestrator | rtt min/avg/max/mdev = 1.790/4.039/7.902/2.743 ms
2026-04-11 07:58:49.926869 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-11 07:58:49.926897 | orchestrator | + ping -c3 192.168.112.166
2026-04-11 07:58:49.937126 | orchestrator | PING 192.168.112.166 (192.168.112.166) 56(84) bytes of data.
2026-04-11 07:58:49.937190 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=1 ttl=63 time=5.57 ms
2026-04-11 07:58:50.935723 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=2 ttl=63 time=2.06 ms
2026-04-11 07:58:51.937274 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=3 ttl=63 time=1.67 ms
2026-04-11 07:58:51.937375 | orchestrator |
2026-04-11 07:58:51.937391 | orchestrator | --- 192.168.112.166 ping statistics ---
2026-04-11 07:58:51.937403 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-11 07:58:51.937415 | orchestrator | rtt min/avg/max/mdev = 1.667/3.098/5.566/1.752 ms
2026-04-11 07:58:51.937426 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-11 07:58:52.050616 | orchestrator | ok: Runtime: 0:11:39.195959
2026-04-11 07:58:52.088126 |
2026-04-11 07:58:52.088247 | PLAY RECAP
2026-04-11 07:58:52.088307 | orchestrator | ok: 32 changed: 13 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-11 07:58:52.088332 |
2026-04-11 07:58:52.382597 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-04-11 07:58:52.388695 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-11 07:58:53.210315 |
2026-04-11 07:58:53.210498 | PLAY [Post output play]
2026-04-11 07:58:53.229058 |
2026-04-11 07:58:53.229236 | LOOP [stage-output : Register sources]
2026-04-11 07:58:53.301823 |
2026-04-11 07:58:53.302147 | TASK [stage-output : Check sudo]
2026-04-11 07:58:54.163007 | orchestrator | sudo: a password is required
2026-04-11 07:58:54.340814 | orchestrator | ok: Runtime: 0:00:00.015239
2026-04-11 07:58:54.357638 |
2026-04-11 07:58:54.357853 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-11 07:58:54.397071 |
2026-04-11 07:58:54.397349 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-11 07:58:54.457414 | orchestrator | ok
2026-04-11 07:58:54.466956 |
2026-04-11 07:58:54.467096 | LOOP [stage-output : Ensure target folders exist]
2026-04-11 07:58:54.922044 | orchestrator | ok: "docs"
2026-04-11 07:58:54.922292 |
2026-04-11 07:58:55.168943 | orchestrator | ok: "artifacts"
2026-04-11 07:58:55.428785 | orchestrator | ok: "logs"
2026-04-11 07:58:55.444346 |
2026-04-11 07:58:55.444600 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-11 07:58:55.477215 |
2026-04-11 07:58:55.477405 | TASK [stage-output : Make all log files readable]
2026-04-11 07:58:55.761492 | orchestrator | ok
2026-04-11 07:58:55.769312 |
2026-04-11 07:58:55.769433 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-11 07:58:55.794641 | orchestrator | skipping: Conditional result was False
2026-04-11 07:58:55.811314 |
2026-04-11 07:58:55.811465 | TASK [stage-output : Discover log files for compression]
2026-04-11 07:58:55.827299 | orchestrator | skipping: Conditional result was False
2026-04-11 07:58:55.837526 |
2026-04-11 07:58:55.837692 | LOOP [stage-output : Archive everything from logs]
2026-04-11 07:58:55.883273 |
2026-04-11 07:58:55.883468 | PLAY [Post cleanup play]
2026-04-11 07:58:55.892027 |
2026-04-11 07:58:55.892135 | TASK [Set cloud fact (Zuul deployment)]
2026-04-11 07:58:55.960115 | orchestrator | ok
2026-04-11 07:58:55.971466 |
2026-04-11 07:58:55.971600 | TASK [Set cloud fact (local deployment)]
2026-04-11 07:58:55.998158 | orchestrator | skipping: Conditional result was False
2026-04-11 07:58:56.014691 |
2026-04-11 07:58:56.014927 | TASK [Clean the cloud environment]
2026-04-11 07:58:56.594317 | orchestrator | 2026-04-11 07:58:56 - clean up servers
2026-04-11 07:58:57.351175 | orchestrator | 2026-04-11 07:58:57 - testbed-manager
2026-04-11 07:58:57.432198 | orchestrator | 2026-04-11 07:58:57 - testbed-node-1
2026-04-11 07:58:57.521217 | orchestrator | 2026-04-11 07:58:57 - testbed-node-3
2026-04-11 07:58:57.606144 | orchestrator | 2026-04-11 07:58:57 - testbed-node-0
2026-04-11 07:58:57.700510 |
orchestrator | 2026-04-11 07:58:57 - testbed-node-2 2026-04-11 07:58:57.792916 | orchestrator | 2026-04-11 07:58:57 - testbed-node-5 2026-04-11 07:58:57.886172 | orchestrator | 2026-04-11 07:58:57 - testbed-node-4 2026-04-11 07:58:57.979198 | orchestrator | 2026-04-11 07:58:57 - clean up keypairs 2026-04-11 07:58:57.998382 | orchestrator | 2026-04-11 07:58:57 - testbed 2026-04-11 07:58:58.023721 | orchestrator | 2026-04-11 07:58:58 - wait for servers to be gone 2026-04-11 07:59:06.759116 | orchestrator | 2026-04-11 07:59:06 - clean up ports 2026-04-11 07:59:06.955256 | orchestrator | 2026-04-11 07:59:06 - 19b1ba75-73a0-4a7f-b3d4-61abc99d0b85 2026-04-11 07:59:07.242493 | orchestrator | 2026-04-11 07:59:07 - 4ffc02fe-d386-4546-b34a-88c1519f38d5 2026-04-11 07:59:07.550810 | orchestrator | 2026-04-11 07:59:07 - 5ea00f8b-cc69-45ad-a661-1ea9f1c5b2d4 2026-04-11 07:59:07.963080 | orchestrator | 2026-04-11 07:59:07 - 8d9ba641-9a87-421f-a3e8-7ab9f3f0c5d6 2026-04-11 07:59:08.193788 | orchestrator | 2026-04-11 07:59:08 - 92ad4902-83b2-4a75-a847-72461faab5c9 2026-04-11 07:59:08.405253 | orchestrator | 2026-04-11 07:59:08 - df7bc680-6d48-442f-9fed-4d82823cc7b2 2026-04-11 07:59:08.604732 | orchestrator | 2026-04-11 07:59:08 - e69386be-5169-4567-bd54-71e3f9f1ab07 2026-04-11 07:59:08.829037 | orchestrator | 2026-04-11 07:59:08 - clean up volumes 2026-04-11 07:59:08.966963 | orchestrator | 2026-04-11 07:59:08 - testbed-volume-3-node-base 2026-04-11 07:59:09.009008 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-2-node-base 2026-04-11 07:59:09.054357 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-5-node-base 2026-04-11 07:59:09.103060 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-4-node-base 2026-04-11 07:59:09.143502 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-1-node-base 2026-04-11 07:59:09.185917 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-0-node-base 2026-04-11 07:59:09.227797 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-4-node-4 
2026-04-11 07:59:09.269139 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-manager-base 2026-04-11 07:59:09.314240 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-3-node-3 2026-04-11 07:59:09.357016 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-0-node-3 2026-04-11 07:59:09.397315 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-1-node-4 2026-04-11 07:59:09.442903 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-2-node-5 2026-04-11 07:59:09.485813 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-8-node-5 2026-04-11 07:59:09.526153 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-5-node-5 2026-04-11 07:59:09.569058 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-6-node-3 2026-04-11 07:59:09.608395 | orchestrator | 2026-04-11 07:59:09 - testbed-volume-7-node-4 2026-04-11 07:59:09.653464 | orchestrator | 2026-04-11 07:59:09 - disconnect routers 2026-04-11 07:59:09.785924 | orchestrator | 2026-04-11 07:59:09 - testbed 2026-04-11 07:59:10.862008 | orchestrator | 2026-04-11 07:59:10 - clean up subnets 2026-04-11 07:59:10.915345 | orchestrator | 2026-04-11 07:59:10 - subnet-testbed-management 2026-04-11 07:59:11.100267 | orchestrator | 2026-04-11 07:59:11 - clean up networks 2026-04-11 07:59:11.294930 | orchestrator | 2026-04-11 07:59:11 - net-testbed-management 2026-04-11 07:59:11.610317 | orchestrator | 2026-04-11 07:59:11 - clean up security groups 2026-04-11 07:59:11.649038 | orchestrator | 2026-04-11 07:59:11 - testbed-node 2026-04-11 07:59:11.759114 | orchestrator | 2026-04-11 07:59:11 - testbed-management 2026-04-11 07:59:11.886383 | orchestrator | 2026-04-11 07:59:11 - clean up floating ips 2026-04-11 07:59:11.921255 | orchestrator | 2026-04-11 07:59:11 - 81.163.192.48 2026-04-11 07:59:12.913007 | orchestrator | 2026-04-11 07:59:12 - clean up routers 2026-04-11 07:59:12.978569 | orchestrator | 2026-04-11 07:59:12 - testbed 2026-04-11 07:59:14.074354 | orchestrator | ok: Runtime: 0:00:17.477338 2026-04-11 07:59:14.079255 
| 2026-04-11 07:59:14.079432 | PLAY RECAP 2026-04-11 07:59:14.079568 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-04-11 07:59:14.079654 | 2026-04-11 07:59:14.233968 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-04-11 07:59:14.235021 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-04-11 07:59:15.003100 | 2026-04-11 07:59:15.003277 | PLAY [Cleanup play] 2026-04-11 07:59:15.020161 | 2026-04-11 07:59:15.020318 | TASK [Set cloud fact (Zuul deployment)] 2026-04-11 07:59:15.079607 | orchestrator | ok 2026-04-11 07:59:15.089531 | 2026-04-11 07:59:15.089707 | TASK [Set cloud fact (local deployment)] 2026-04-11 07:59:15.114116 | orchestrator | skipping: Conditional result was False 2026-04-11 07:59:15.126324 | 2026-04-11 07:59:15.126474 | TASK [Clean the cloud environment] 2026-04-11 07:59:16.263585 | orchestrator | 2026-04-11 07:59:16 - clean up servers 2026-04-11 07:59:16.740183 | orchestrator | 2026-04-11 07:59:16 - clean up keypairs 2026-04-11 07:59:16.757146 | orchestrator | 2026-04-11 07:59:16 - wait for servers to be gone 2026-04-11 07:59:16.797719 | orchestrator | 2026-04-11 07:59:16 - clean up ports 2026-04-11 07:59:16.889067 | orchestrator | 2026-04-11 07:59:16 - clean up volumes 2026-04-11 07:59:16.972139 | orchestrator | 2026-04-11 07:59:16 - disconnect routers 2026-04-11 07:59:17.003651 | orchestrator | 2026-04-11 07:59:17 - clean up subnets 2026-04-11 07:59:17.027989 | orchestrator | 2026-04-11 07:59:17 - clean up networks 2026-04-11 07:59:17.215571 | orchestrator | 2026-04-11 07:59:17 - clean up security groups 2026-04-11 07:59:17.249402 | orchestrator | 2026-04-11 07:59:17 - clean up floating ips 2026-04-11 07:59:17.273876 | orchestrator | 2026-04-11 07:59:17 - clean up routers 2026-04-11 07:59:17.663030 | orchestrator | ok: Runtime: 0:00:01.411574 2026-04-11 07:59:17.666897 | 2026-04-11 07:59:17.667069 | PLAY RECAP 
2026-04-11 07:59:17.667211 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-04-11 07:59:17.667282 | 2026-04-11 07:59:17.793457 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-04-11 07:59:17.794701 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-04-11 07:59:18.546398 | 2026-04-11 07:59:18.546575 | PLAY [Base post-fetch] 2026-04-11 07:59:18.562586 | 2026-04-11 07:59:18.562814 | TASK [fetch-output : Set log path for multiple nodes] 2026-04-11 07:59:18.619397 | orchestrator | skipping: Conditional result was False 2026-04-11 07:59:18.634016 | 2026-04-11 07:59:18.634236 | TASK [fetch-output : Set log path for single node] 2026-04-11 07:59:18.683445 | orchestrator | ok 2026-04-11 07:59:18.692486 | 2026-04-11 07:59:18.692655 | LOOP [fetch-output : Ensure local output dirs] 2026-04-11 07:59:19.183315 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/85a52db06bb14d3cb1db1d0bd460f0db/work/logs" 2026-04-11 07:59:19.459259 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/85a52db06bb14d3cb1db1d0bd460f0db/work/artifacts" 2026-04-11 07:59:19.727993 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/85a52db06bb14d3cb1db1d0bd460f0db/work/docs" 2026-04-11 07:59:19.754055 | 2026-04-11 07:59:19.754246 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-04-11 07:59:20.724344 | orchestrator | changed: .d..t...... ./ 2026-04-11 07:59:20.724714 | orchestrator | changed: All items complete 2026-04-11 07:59:20.724798 | 2026-04-11 07:59:21.473027 | orchestrator | changed: .d..t...... ./ 2026-04-11 07:59:22.235856 | orchestrator | changed: .d..t...... 
./ 2026-04-11 07:59:22.267010 | 2026-04-11 07:59:22.267171 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-04-11 07:59:22.300839 | orchestrator | skipping: Conditional result was False 2026-04-11 07:59:22.303323 | orchestrator | skipping: Conditional result was False 2026-04-11 07:59:22.324985 | 2026-04-11 07:59:22.325101 | PLAY RECAP 2026-04-11 07:59:22.325170 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-04-11 07:59:22.325203 | 2026-04-11 07:59:22.450378 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-04-11 07:59:22.453987 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-11 07:59:23.232061 | 2026-04-11 07:59:23.232232 | PLAY [Base post] 2026-04-11 07:59:23.247023 | 2026-04-11 07:59:23.247169 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-04-11 07:59:24.239460 | orchestrator | changed 2026-04-11 07:59:24.251253 | 2026-04-11 07:59:24.251404 | PLAY RECAP 2026-04-11 07:59:24.251482 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-04-11 07:59:24.251557 | 2026-04-11 07:59:24.369920 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-11 07:59:24.372510 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-04-11 07:59:25.157908 | 2026-04-11 07:59:25.158164 | PLAY [Base post-logs] 2026-04-11 07:59:25.176247 | 2026-04-11 07:59:25.176485 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-04-11 07:59:25.641516 | localhost | changed 2026-04-11 07:59:25.652081 | 2026-04-11 07:59:25.652232 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-04-11 07:59:25.688850 | localhost | ok 2026-04-11 07:59:25.693024 | 2026-04-11 07:59:25.693147 | TASK [Set zuul-log-path fact] 2026-04-11 
07:59:25.708958 | localhost | ok 2026-04-11 07:59:25.720138 | 2026-04-11 07:59:25.720268 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-04-11 07:59:25.746544 | localhost | ok 2026-04-11 07:59:25.751006 | 2026-04-11 07:59:25.751144 | TASK [upload-logs : Create log directories] 2026-04-11 07:59:26.260090 | localhost | changed 2026-04-11 07:59:26.263596 | 2026-04-11 07:59:26.263718 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-04-11 07:59:26.818103 | localhost -> localhost | ok: Runtime: 0:00:00.007004 2026-04-11 07:59:26.827462 | 2026-04-11 07:59:26.827657 | TASK [upload-logs : Upload logs to log server] 2026-04-11 07:59:27.402489 | localhost | Output suppressed because no_log was given 2026-04-11 07:59:27.404680 | 2026-04-11 07:59:27.404823 | LOOP [upload-logs : Compress console log and json output] 2026-04-11 07:59:27.458306 | localhost | skipping: Conditional result was False 2026-04-11 07:59:27.463708 | localhost | skipping: Conditional result was False 2026-04-11 07:59:27.476416 | 2026-04-11 07:59:27.476609 | LOOP [upload-logs : Upload compressed console log and json output] 2026-04-11 07:59:27.525318 | localhost | skipping: Conditional result was False 2026-04-11 07:59:27.526013 | 2026-04-11 07:59:27.530061 | localhost | skipping: Conditional result was False 2026-04-11 07:59:27.538572 | 2026-04-11 07:59:27.538882 | LOOP [upload-logs : Upload console log and json output]